Full Code of awslabs/mxnet-model-server for AI

(Preview only: content truncated at 2,000K characters.)
Repository: awslabs/mxnet-model-server
Branch: master
Commit: 706aa9c75557
Files: 466
Total size: 1.8 MB

Directory structure:
gitextract_mh90caup/

├── .circleci/
│   ├── README.md
│   ├── config.yml
│   ├── images/
│   │   ├── Dockerfile.python2.7
│   │   └── Dockerfile.python3.6
│   └── scripts/
│       ├── linux_build.sh
│       ├── linux_test_api.sh
│       ├── linux_test_benchmark.sh
│       ├── linux_test_modelarchiver.sh
│       ├── linux_test_perf_regression.sh
│       └── linux_test_python.sh
├── .coveragerc
├── .github/
│   └── PULL_REQUEST_TEMPLATE.md
├── .gitignore
├── LICENSE
├── LICENSE.txt
├── MANIFEST.in
├── PyPiDescription.rst
├── README.md
├── _config.yml
├── benchmarks/
│   ├── README.md
│   ├── benchmark.py
│   ├── install_dependencies.sh
│   ├── jmx/
│   │   ├── concurrentLoadPlan.jmx
│   │   ├── concurrentScaleCalls.jmx
│   │   ├── graphsGenerator.jmx
│   │   ├── imageInputModelPlan.jmx
│   │   ├── multipleModelsLoadPlan.jmx
│   │   ├── pingPlan.jmx
│   │   └── textInputModelPlan.jmx
│   ├── lstm_ip.json
│   ├── mac_install_dependencies.sh
│   ├── noop_ip.txt
│   └── upload_results_to_s3.sh
├── ci/
│   ├── Dockerfile.python2.7
│   ├── Dockerfile.python3.6
│   ├── README.md
│   ├── buildspec.yml
│   ├── dockerd-entrypoint.sh
│   └── m2-settings.xml
├── docker/
│   ├── Dockerfile.cpu
│   ├── Dockerfile.gpu
│   ├── Dockerfile.nightly-cpu
│   ├── Dockerfile.nightly-gpu
│   ├── README.md
│   ├── advanced-dockerfiles/
│   │   ├── Dockerfile.base.nvidia_cu92_ubuntu_16_04.py2_7
│   │   ├── Dockerfile.base.nvidia_cu92_ubuntu_16_04.py2_7.nightly
│   │   ├── Dockerfile.base.nvidia_cu92_ubuntu_16_04.py3_6
│   │   ├── Dockerfile.base.nvidia_cu92_ubuntu_16_04.py3_6.nightly
│   │   ├── Dockerfile.base.ubuntu_16_04.py2_7
│   │   ├── Dockerfile.base.ubuntu_16_04.py2_7.nightly
│   │   ├── Dockerfile.base.ubuntu_16_04.py3_6
│   │   ├── Dockerfile.base.ubuntu_16_04.py3_6.nightly
│   │   ├── config.properties
│   │   └── dockerd-entrypoint.sh
│   ├── advanced_settings.md
│   ├── config.properties
│   └── dockerd-entrypoint.sh
├── docs/
│   ├── README.md
│   ├── batch_inference_with_mms.md
│   ├── configuration.md
│   ├── custom_service.md
│   ├── elastic_inference.md
│   ├── images/
│   │   └── helpers/
│   │       └── plugins_sdk_class_uml_diagrams.puml
│   ├── inference_api.md
│   ├── install.md
│   ├── logging.md
│   ├── management_api.md
│   ├── metrics.md
│   ├── migration.md
│   ├── mms_endpoint_plugins.md
│   ├── mms_on_fargate.md
│   ├── model_zoo.md
│   ├── rest_api.md
│   └── server.md
├── examples/
│   ├── README.md
│   ├── densenet_pytorch/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── densenet_service.py
│   │   └── index_to_name.json
│   ├── gluon_alexnet/
│   │   ├── README.md
│   │   ├── gluon_hybrid_alexnet.py
│   │   ├── gluon_imperative_alexnet.py
│   │   ├── gluon_pretrained_alexnet.py
│   │   ├── signature.json
│   │   └── synset.txt
│   ├── gluon_character_cnn/
│   │   ├── README.md
│   │   ├── gluon_crepe.py
│   │   ├── signature.json
│   │   └── synset.txt
│   ├── lstm_ptb/
│   │   ├── README.md
│   │   └── lstm_ptb_service.py
│   ├── metrics_cloudwatch/
│   │   ├── __init__.py
│   │   └── metric_push_example.py
│   ├── model_service_template/
│   │   ├── gluon_base_service.py
│   │   ├── model_handler.py
│   │   ├── mxnet_model_service.py
│   │   ├── mxnet_utils/
│   │   │   ├── __init__.py
│   │   │   ├── image.py
│   │   │   ├── ndarray.py
│   │   │   └── nlp.py
│   │   ├── mxnet_vision_batching.py
│   │   └── mxnet_vision_service.py
│   ├── mxnet_vision/
│   │   └── README.md
│   ├── sockeye_translate/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── config/
│   │   │   └── config.properties
│   │   ├── model_handler.py
│   │   ├── preprocessor.py
│   │   └── sockeye_service.py
│   └── ssd/
│       ├── README.md
│       ├── example_outputs.md
│       └── ssd_service.py
├── frontend/
│   ├── .gitignore
│   ├── README.md
│   ├── build.gradle
│   ├── cts/
│   │   ├── build.gradle
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── com/
│   │           │       └── amazonaws/
│   │           │           └── ml/
│   │           │               └── mms/
│   │           │                   └── cts/
│   │           │                       ├── Cts.java
│   │           │                       ├── HttpClient.java
│   │           │                       └── ModelInfo.java
│   │           └── resources/
│   │               └── log4j2.xml
│   ├── gradle/
│   │   └── wrapper/
│   │       ├── gradle-wrapper.jar
│   │       └── gradle-wrapper.properties
│   ├── gradle.properties
│   ├── gradlew
│   ├── gradlew.bat
│   ├── modelarchive/
│   │   ├── build.gradle
│   │   └── src/
│   │       ├── main/
│   │       │   └── java/
│   │       │       └── com/
│   │       │           └── amazonaws/
│   │       │               └── ml/
│   │       │                   └── mms/
│   │       │                       └── archive/
│   │       │                           ├── DownloadModelException.java
│   │       │                           ├── Hex.java
│   │       │                           ├── InvalidModelException.java
│   │       │                           ├── LegacyManifest.java
│   │       │                           ├── Manifest.java
│   │       │                           ├── ModelArchive.java
│   │       │                           ├── ModelException.java
│   │       │                           ├── ModelNotFoundException.java
│   │       │                           └── ZipUtils.java
│   │       └── test/
│   │           ├── java/
│   │           │   └── com/
│   │           │       └── amazonaws/
│   │           │           └── ml/
│   │           │               └── mms/
│   │           │                   ├── archive/
│   │           │                   │   ├── CoverageTest.java
│   │           │                   │   ├── Exporter.java
│   │           │                   │   └── ModelArchiveTest.java
│   │           │                   └── test/
│   │           │                       └── TestHelper.java
│   │           └── resources/
│   │               └── models/
│   │                   ├── custom-return-code/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── service.py
│   │                   ├── error_batch/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── service.py
│   │                   ├── init-error/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── invalid_service.py
│   │                   ├── invalid/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── invalid_service.py
│   │                   ├── loading-memory-error/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── service.py
│   │                   ├── logging/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── service.py
│   │                   ├── noop-no-manifest/
│   │                   │   └── service.py
│   │                   ├── noop-v0.1/
│   │                   │   ├── MANIFEST.json
│   │                   │   ├── noop_service.py
│   │                   │   └── signature.json
│   │                   ├── noop-v1.0/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── service.py
│   │                   ├── noop-v1.0-config-tests/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── service.py
│   │                   ├── prediction-memory-error/
│   │                   │   ├── MAR-INF/
│   │                   │   │   └── MANIFEST.json
│   │                   │   └── service.py
│   │                   └── respheader-test/
│   │                       ├── MAR-INF/
│   │                       │   └── MANIFEST.json
│   │                       └── service.py
│   ├── server/
│   │   ├── build.gradle
│   │   └── src/
│   │       ├── main/
│   │       │   ├── java/
│   │       │   │   └── com/
│   │       │   │       └── amazonaws/
│   │       │   │           └── ml/
│   │       │   │               └── mms/
│   │       │   │                   ├── ModelServer.java
│   │       │   │                   ├── ServerInitializer.java
│   │       │   │                   ├── http/
│   │       │   │                   │   ├── ApiDescriptionRequestHandler.java
│   │       │   │                   │   ├── BadRequestException.java
│   │       │   │                   │   ├── ConflictStatusException.java
│   │       │   │                   │   ├── DescribeModelResponse.java
│   │       │   │                   │   ├── ErrorResponse.java
│   │       │   │                   │   ├── HttpRequestHandler.java
│   │       │   │                   │   ├── HttpRequestHandlerChain.java
│   │       │   │                   │   ├── InferenceRequestHandler.java
│   │       │   │                   │   ├── InternalServerException.java
│   │       │   │                   │   ├── InvalidPluginException.java
│   │       │   │                   │   ├── InvalidRequestHandler.java
│   │       │   │                   │   ├── ListModelsResponse.java
│   │       │   │                   │   ├── ManagementRequestHandler.java
│   │       │   │                   │   ├── MethodNotAllowedException.java
│   │       │   │                   │   ├── RequestTimeoutException.java
│   │       │   │                   │   ├── ResourceNotFoundException.java
│   │       │   │                   │   ├── ServiceUnavailableException.java
│   │       │   │                   │   ├── Session.java
│   │       │   │                   │   ├── StatusResponse.java
│   │       │   │                   │   └── messages/
│   │       │   │                   │       └── RegisterModelRequest.java
│   │       │   │                   ├── metrics/
│   │       │   │                   │   ├── Dimension.java
│   │       │   │                   │   ├── Metric.java
│   │       │   │                   │   ├── MetricCollector.java
│   │       │   │                   │   └── MetricManager.java
│   │       │   │                   ├── openapi/
│   │       │   │                   │   ├── Encoding.java
│   │       │   │                   │   ├── Info.java
│   │       │   │                   │   ├── MediaType.java
│   │       │   │                   │   ├── OpenApi.java
│   │       │   │                   │   ├── OpenApiUtils.java
│   │       │   │                   │   ├── Operation.java
│   │       │   │                   │   ├── Parameter.java
│   │       │   │                   │   ├── Path.java
│   │       │   │                   │   ├── PathParameter.java
│   │       │   │                   │   ├── QueryParameter.java
│   │       │   │                   │   ├── RequestBody.java
│   │       │   │                   │   ├── Response.java
│   │       │   │                   │   └── Schema.java
│   │       │   │                   ├── servingsdk/
│   │       │   │                   │   └── impl/
│   │       │   │                   │       ├── ModelServerContext.java
│   │       │   │                   │       ├── ModelServerModel.java
│   │       │   │                   │       ├── ModelServerRequest.java
│   │       │   │                   │       ├── ModelServerResponse.java
│   │       │   │                   │       ├── ModelWorker.java
│   │       │   │                   │       └── PluginsManager.java
│   │       │   │                   ├── util/
│   │       │   │                   │   ├── ConfigManager.java
│   │       │   │                   │   ├── Connector.java
│   │       │   │                   │   ├── ConnectorType.java
│   │       │   │                   │   ├── JsonUtils.java
│   │       │   │                   │   ├── NettyUtils.java
│   │       │   │                   │   ├── OpenSslKey.java
│   │       │   │                   │   ├── ServerGroups.java
│   │       │   │                   │   ├── codec/
│   │       │   │                   │   │   ├── CodecUtils.java
│   │       │   │                   │   │   ├── ModelRequestEncoder.java
│   │       │   │                   │   │   └── ModelResponseDecoder.java
│   │       │   │                   │   ├── logging/
│   │       │   │                   │   │   └── QLogLayout.java
│   │       │   │                   │   └── messages/
│   │       │   │                   │       ├── BaseModelRequest.java
│   │       │   │                   │       ├── InputParameter.java
│   │       │   │                   │       ├── ModelInferenceRequest.java
│   │       │   │                   │       ├── ModelLoadModelRequest.java
│   │       │   │                   │       ├── ModelWorkerResponse.java
│   │       │   │                   │       ├── Predictions.java
│   │       │   │                   │       ├── RequestInput.java
│   │       │   │                   │       └── WorkerCommands.java
│   │       │   │                   └── wlm/
│   │       │   │                       ├── BatchAggregator.java
│   │       │   │                       ├── Job.java
│   │       │   │                       ├── Model.java
│   │       │   │                       ├── ModelManager.java
│   │       │   │                       ├── WorkLoadManager.java
│   │       │   │                       ├── WorkerInitializationException.java
│   │       │   │                       ├── WorkerLifeCycle.java
│   │       │   │                       ├── WorkerState.java
│   │       │   │                       ├── WorkerStateListener.java
│   │       │   │                       └── WorkerThread.java
│   │       │   └── resources/
│   │       │       └── log4j2.xml
│   │       └── test/
│   │           ├── java/
│   │           │   └── com/
│   │           │       └── amazonaws/
│   │           │           └── ml/
│   │           │               └── mms/
│   │           │                   ├── CoverageTest.java
│   │           │                   ├── ModelServerTest.java
│   │           │                   ├── TestUtils.java
│   │           │                   ├── test/
│   │           │                   │   └── TestHelper.java
│   │           │                   └── util/
│   │           │                       └── ConfigManagerTest.java
│   │           └── resources/
│   │               ├── certs.pem
│   │               ├── config.properties
│   │               ├── config_test_env.properties
│   │               ├── describe_api.json
│   │               ├── inference_open_api.json
│   │               ├── key.pem
│   │               ├── keystore.p12
│   │               └── management_open_api.json
│   ├── settings.gradle
│   └── tools/
│       ├── conf/
│       │   ├── checkstyle.xml
│       │   ├── findbugs-exclude.xml
│       │   ├── pmd.xml
│       │   └── suppressions.xml
│       └── gradle/
│           ├── check.gradle
│           ├── formatter.gradle
│           └── launcher.gradle
├── mms/
│   ├── .gitignore
│   ├── __init__.py
│   ├── arg_parser.py
│   ├── configs/
│   │   └── sagemaker_config.properties
│   ├── context.py
│   ├── export_model.py
│   ├── metrics/
│   │   ├── __init__.py
│   │   ├── dimension.py
│   │   ├── metric.py
│   │   ├── metric_collector.py
│   │   ├── metric_encoder.py
│   │   ├── metrics_store.py
│   │   ├── process_memory_metric.py
│   │   ├── system_metrics.py
│   │   └── unit.py
│   ├── model_loader.py
│   ├── model_server.py
│   ├── model_service/
│   │   ├── __init__.py
│   │   ├── gluon_vision_service.py
│   │   ├── model_service.py
│   │   ├── mxnet_model_service.py
│   │   └── mxnet_vision_service.py
│   ├── model_service_worker.py
│   ├── protocol/
│   │   ├── __init__.py
│   │   └── otf_message_handler.py
│   ├── service.py
│   ├── tests/
│   │   ├── README.md
│   │   ├── pylintrc
│   │   └── unit_tests/
│   │       ├── helper/
│   │       │   ├── __init__.py
│   │       │   └── pixel2pixel_service.py
│   │       ├── model_service/
│   │       │   ├── dummy_model/
│   │       │   │   ├── MANIFEST.json
│   │       │   │   └── dummy_model_service.py
│   │       │   ├── test_mxnet_image.py
│   │       │   ├── test_mxnet_ndarray.py
│   │       │   ├── test_mxnet_nlp.py
│   │       │   └── test_service.py
│   │       ├── test_beckend_metric.py
│   │       ├── test_model_loader.py
│   │       ├── test_model_service_worker.py
│   │       ├── test_otf_codec_protocol.py
│   │       ├── test_utils/
│   │       │   ├── MAR-INF/
│   │       │   │   └── MANIFEST.json
│   │       │   ├── dummy_class_model_service.py
│   │       │   └── dummy_func_model_service.py
│   │       ├── test_version.py
│   │       └── test_worker_service.py
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── mxnet/
│   │   │   ├── __init__.py
│   │   │   ├── image.py
│   │   │   ├── ndarray.py
│   │   │   └── nlp.py
│   │   └── timeit_decorator.py
│   └── version.py
├── model-archiver/
│   ├── .coveragerc
│   ├── LICENSE
│   ├── MANIFEST.in
│   ├── PyPiDescription.rst
│   ├── README.md
│   ├── docs/
│   │   └── convert_from_onnx.md
│   ├── model_archiver/
│   │   ├── __init__.py
│   │   ├── arg_parser.py
│   │   ├── manifest_components/
│   │   │   ├── __init__.py
│   │   │   ├── engine.py
│   │   │   ├── manifest.py
│   │   │   ├── model.py
│   │   │   └── publisher.py
│   │   ├── model_archiver_error.py
│   │   ├── model_packaging.py
│   │   ├── model_packaging_utils.py
│   │   ├── tests/
│   │   │   ├── integ_tests/
│   │   │   │   ├── configuration.json
│   │   │   │   ├── resources/
│   │   │   │   │   ├── onnx_model/
│   │   │   │   │   │   ├── model.onnx
│   │   │   │   │   │   └── service.py
│   │   │   │   │   └── regular_model/
│   │   │   │   │       ├── dir/
│   │   │   │   │       │   └── 1.py
│   │   │   │   │       ├── dummy-artifacts.txt
│   │   │   │   │       └── service.py
│   │   │   │   └── test_integration_model_archiver.py
│   │   │   ├── pylintrc
│   │   │   └── unit_tests/
│   │   │       ├── test_model_packaging.py
│   │   │       ├── test_model_packaging_utils.py
│   │   │       └── test_version.py
│   │   └── version.py
│   └── setup.py
├── performance_regression/
│   └── imageInputModelPlan.jmx.yaml
├── plugins/
│   ├── build.gradle
│   ├── endpoints/
│   │   ├── build.gradle
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── software/
│   │           │       └── amazon/
│   │           │           └── ai/
│   │           │               └── mms/
│   │           │                   └── plugins/
│   │           │                       └── endpoint/
│   │           │                           ├── ExecutionParameters.java
│   │           │                           └── Ping.java
│   │           └── resources/
│   │               └── META-INF/
│   │                   └── services/
│   │                       └── software.amazon.ai.mms.servingsdk.ModelServerEndpoint
│   ├── gradle/
│   │   └── wrapper/
│   │       ├── gradle-wrapper.jar
│   │       └── gradle-wrapper.properties
│   ├── gradle.properties
│   ├── gradlew
│   ├── gradlew.bat
│   ├── settings.gradle
│   └── tools/
│       ├── conf/
│       │   ├── checkstyle.xml
│       │   ├── findbugs-exclude.xml
│       │   ├── pmd.xml
│       │   └── suppressions.xml
│       └── gradle/
│           ├── check.gradle
│           ├── formatter.gradle
│           └── launcher.gradle
├── run_ci_tests.sh
├── run_circleci_tests.py
├── serving-sdk/
│   ├── checkstyle.xml
│   ├── pom.xml
│   └── src/
│       ├── main/
│       │   └── java/
│       │       └── software/
│       │           └── amazon/
│       │               └── ai/
│       │                   └── mms/
│       │                       └── servingsdk/
│       │                           ├── Context.java
│       │                           ├── Model.java
│       │                           ├── ModelServerEndpoint.java
│       │                           ├── ModelServerEndpointException.java
│       │                           ├── Worker.java
│       │                           ├── annotations/
│       │                           │   ├── Endpoint.java
│       │                           │   └── helpers/
│       │                           │       └── EndpointTypes.java
│       │                           └── http/
│       │                               ├── Request.java
│       │                               └── Response.java
│       └── test/
│           └── java/
│               └── software/
│                   └── amazon/
│                       └── ai/
│                           └── mms/
│                               └── servingsdk/
│                                   └── ModelServerEndpointTest.java
├── setup.py
├── test/
│   ├── README.md
│   ├── postman/
│   │   ├── environment.json
│   │   ├── https_test_collection.json
│   │   ├── inference_api_test_collection.json
│   │   ├── inference_data.json
│   │   └── management_api_test_collection.json
│   ├── regression_tests.sh
│   └── resources/
│       ├── certs.pem
│       ├── config.properties
│       └── key.pem
└── tests/
    └── performance/
        ├── README.md
        ├── TESTS.md
        ├── agents/
        │   ├── __init__.py
        │   ├── config.ini
        │   ├── configuration.py
        │   ├── metrics/
        │   │   └── __init__.py
        │   ├── metrics_collector.py
        │   ├── metrics_monitoring_inproc.py
        │   ├── metrics_monitoring_server.py
        │   └── utils/
        │       ├── __init__.py
        │       └── process.py
        ├── pylintrc
        ├── requirements.txt
        ├── run_performance_suite.py
        ├── runs/
        │   ├── __init__.py
        │   ├── compare.py
        │   ├── context.py
        │   ├── junit.py
        │   ├── storage.py
        │   └── taurus/
        │       ├── __init__.py
        │       ├── reader.py
        │       └── x2junit.py
        ├── tests/
        │   ├── api_description/
        │   │   ├── api_description.jmx
        │   │   ├── api_description.yaml
        │   │   └── environments/
        │   │       └── xlarge.yaml
        │   ├── batch_and_single_inference/
        │   │   ├── batch_and_single_inference.jmx
        │   │   ├── batch_and_single_inference.yaml
        │   │   └── environments/
        │   │       └── xlarge.yaml
        │   ├── batch_inference/
        │   │   ├── batch_inference.jmx
        │   │   ├── batch_inference.yaml
        │   │   └── environments/
        │   │       └── xlarge.yaml
        │   ├── examples_local_criteria/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── examples_local_criteria.jmx
        │   │   └── examples_local_criteria.yaml
        │   ├── examples_local_monitoring/
        │   │   ├── examples_local_monitoring.jmx
        │   │   └── examples_local_monitoring.yaml
        │   ├── examples_remote_criteria/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── examples_remote_criteria.jmx
        │   │   └── examples_remote_criteria.yaml
        │   ├── examples_remote_monitoring/
        │   │   ├── examples_remote_monitoring.jmx
        │   │   └── examples_remote_monitoring.yaml
        │   ├── examples_starter/
        │   │   ├── examples_starter.jmx
        │   │   └── examples_starter.yaml
        │   ├── global_config.yaml
        │   ├── health_check/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── health_check.jmx
        │   │   └── health_check.yaml
        │   ├── inference_multiple_models/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── inference_multiple_models.jmx
        │   │   └── inference_multiple_models.yaml
        │   ├── inference_multiple_worker/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── inference_multiple_worker.jmx
        │   │   └── inference_multiple_worker.yaml
        │   ├── inference_single_worker/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── inference_single_worker.jmx
        │   │   └── inference_single_worker.yaml
        │   ├── list_models/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── list_models.jmx
        │   │   └── list_models.yaml
        │   ├── model_description/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── model_description.jmx
        │   │   └── model_description.yaml
        │   ├── multiple_inference_and_scaling/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── multiple_inference_and_scaling.jmx
        │   │   └── multiple_inference_and_scaling.yaml
        │   ├── register_unregister/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── register_unregister.jmx
        │   │   └── register_unregister.yaml
        │   ├── register_unregister_multiple/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── register_unregister_multiple.jmx
        │   │   └── register_unregister_multiple.yaml
        │   ├── scale_down_workers/
        │   │   ├── environments/
        │   │   │   └── xlarge.yaml
        │   │   ├── scale_down_workers.jmx
        │   │   └── scale_down_workers.yaml
        │   └── scale_up_workers/
        │       ├── environments/
        │       │   └── xlarge.yaml
        │       ├── scale_up_workers.jmx
        │       └── scale_up_workers.yaml
        └── utils/
            ├── __init__.py
            ├── fs.py
            ├── pyshell.py
            └── timer.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .circleci/README.md
================================================
# Multi Model Server CircleCI build
Model Server uses CircleCI for builds. This folder contains the config and scripts that are needed for CircleCI.

## config.yml
_config.yml_ contains the MMS build logic used by CircleCI.

## Workflows and Jobs
Currently, the following _workflows_ are available -
1. smoke
2. nightly
3. weekly

The following _jobs_ are executed under each workflow -
1. **build** : Builds _frontend/model-server.jar_ and executes tests from gradle
2. **modelarchiver** : Builds and tests modelarchiver module
3. **python-tests** : Executes pytests from _mms/tests/unit_tests/_
4. **benchmark** : Executes latency benchmark using resnet-18 model
5. (NEW!) **api-tests** : Executes newman test suite for API testing

The following _executors_ are available for job execution -
1. py27
2. py36

> Please check the _workflows_, _jobs_ and _executors_ section in _config.yml_ for an up to date list

## scripts
Instead of using inline commands inside _config.yml_, job steps are configured as shell scripts.  
This is easier to maintain and reduces the chance of errors in _config.yml_.
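As an illustration of this pattern, a minimal job-step script might look like the following. This is a hypothetical sketch, not an actual file from _.circleci/scripts/_; the real scripts differ.

```shell
#!/bin/bash
# Hypothetical sketch of a CircleCI job-step script -- not an actual
# file from .circleci/scripts/. Each script wraps one job step so that
# config.yml only needs a single `command:` line per step.
set -euo pipefail  # any failing command fails the whole CircleCI step

banner() {
    # Print a marker that is easy to spot in the CircleCI step output.
    echo "==> $1"
}

banner "python unit tests"
# The real step would invoke the test runner here, e.g.:
# python -m pytest mms/tests/unit_tests/
banner "done"
```

Keeping the logic in a standalone script also lets developers run the exact same step outside CircleCI.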

## images
MMS uses customized Docker images for its CircleCI build.  
To make sure MMS is compatible with both Python 2 and Python 3, we use two build projects.  
We have published two Docker images on Docker Hub for the code build
* prashantsail/mms-build:python2.7
* prashantsail/mms-build:python3.6

The following files in the _images_ folder are used to create the Docker images
* Dockerfile.python2.7 - Dockerfile for prashantsail/mms-build:python2.7
* Dockerfile.python3.6 - Dockerfile for prashantsail/mms-build:python3.6

## Local CircleCI CLI
To make it easy for developers to debug build issues locally, MMS supports the CircleCI CLI for running a job in a container on your machine.

#### Dependencies
1. CircleCI CLI ([Quick Install](https://circleci.com/docs/2.0/local-cli/#quick-installation))
2. PyYAML (`pip install PyYAML`)
3. Docker (installed and running)
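To sanity-check these dependencies before a local run, a small helper along these lines can be used (a hypothetical snippet, not part of this repository):

```shell
#!/bin/bash
# Hypothetical dependency check for local CircleCI runs -- not a file
# from this repository.
set -u

check_cmd() {
    # Report whether a command is available on PATH.
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: ok"
    else
        echo "$1: missing"
    fi
}

check_cmd circleci  # CircleCI CLI
check_cmd docker    # Docker client
check_cmd python    # needed by run_circleci_tests.py (PyYAML is pip-installed)
```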

#### Command
Developers can use the following command to build MMS locally:  
**./run_circleci_tests.py <workflow_name> -j <job_name> -e <executor_name>**

- _workflow_name_  
This is a mandatory parameter.

- _-j, --job job_name_  
If specified, only the specified job (along with its required parents) is executed.  
If not specified, all jobs in the workflow are executed sequentially.  

- _-e, --executor executor_name_  
If specified, the job is executed only on the specified executor (Docker image).  
If not specified, the job is executed on all available executors.  

```bash
$ cd multi-model-server
$ ./run_circleci_tests.py smoke
$ ./run_circleci_tests.py smoke -j modelarchiver
$ ./run_circleci_tests.py smoke -e py36
$ ./run_circleci_tests.py smoke -j modelarchiver -e py36
```

###### Checklist
> 1. Make sure Docker is running before you start local execution.  
> 2. Make sure the Docker containers have **at least 4 GB RAM and 2 CPUs**.  
> 3. If you are on a network with low bandwidth, we advise you to pull the Docker images explicitly -  
> docker pull prashantsail/mms-build:python2.7  
> docker pull prashantsail/mms-build:python3.6  

`To avoid Pull Request build failures on GitHub, developers should always make sure that their local builds pass.`


================================================
FILE: .circleci/config.yml
================================================
version: 2.1


executors:
  py36:
    docker:
      - image: prashantsail/mms-build:python3.6
    environment:
      _JAVA_OPTIONS: "-Xmx2048m"

  py27:
    docker:
      - image: prashantsail/mms-build:python2.7
    environment:
      _JAVA_OPTIONS: "-Xmx2048m"


commands:
  attach-mms-workspace:
    description: "Attach the MMS directory which was saved into workspace"
    steps:
      - attach_workspace:
          at: .

  install-mms-server:
    description: "Install MMS server from a wheel"
    steps:
      - run:
          name: Install MMS
          command: pip install dist/*.whl

  execute-api-tests:
    description: "Execute API tests from a collection"
    parameters:
      collection:
        type: enum
        enum: [management, inference, https]
        default: management
    steps:
      - run:
          name: Start MMS, Execute << parameters.collection >> API Tests, Stop MMS
          command: .circleci/scripts/linux_test_api.sh << parameters.collection >>
      - store_artifacts:
          name: Store server logs from << parameters.collection >> API tests
          path: mms_<< parameters.collection >>.log
      - store_artifacts:
          name: Store << parameters.collection >> API test results
          path: test/<< parameters.collection >>-api-report.html


jobs:
    build:
      parameters:
        executor:
          type: executor
      executor: << parameters.executor >>
      steps:
        - checkout
        - run:
            name: Build frontend
            command: .circleci/scripts/linux_build.sh
        - store_artifacts:
            name: Store gradle testng results
            path: frontend/server/build/reports/tests/test
        - persist_to_workspace:
            root: .
            paths:
              - .

    python-tests:
      parameters:
        executor:
          type: executor
      executor: << parameters.executor >>
      steps:
        - attach-mms-workspace
        - run:
            name: Execute python unit tests
            command: .circleci/scripts/linux_test_python.sh
        - store_artifacts:
            name: Store python test results
            path: htmlcov

    api-tests:
      parameters:
        executor:
          type: executor
      executor: << parameters.executor >>
      steps:
        - attach-mms-workspace
        - install-mms-server
        - execute-api-tests:
            collection: management
        - execute-api-tests:
            collection: inference
        - execute-api-tests:
            collection: https

    benchmark:
      parameters:
        executor:
          type: executor
      executor: << parameters.executor >>
      steps:
        - attach-mms-workspace
        - install-mms-server
        - run:
            name: Start MMS, Execute benchmark tests, Stop MMS
            command: .circleci/scripts/linux_test_benchmark.sh
        - store_artifacts:
            name: Store server logs from benchmark tests
            path: mms.log
        - store_artifacts:
            name: Store Benchmark Latency resnet-18 results
            path: /tmp/MMSBenchmark/out/latency/resnet-18/report/
            destination: benchmark-latency-resnet-18

    modelarchiver:
      parameters:
        executor:
          type: executor
      executor: << parameters.executor >>
      steps:
        - checkout
        - run:
            name: Execute lint, unit and integration tests
            command: .circleci/scripts/linux_test_modelarchiver.sh
        - store_artifacts:
            name: Store unit tests results from model archiver tests
            path: model-archiver/results_units
            destination: units


workflows:
  version: 2

  smoke:
    jobs:
      - &build
        build:
          name: build-<< matrix.executor >>
          matrix: &matrix
            parameters:
              executor: ["py27", "py36"]
      - &modelarchiver
        modelarchiver:
          name: modelarchiver-<< matrix.executor >>
          matrix: *matrix
      - &python-tests
        python-tests:
          name: python-tests-<< matrix.executor >>
          requires:
            - build-<< matrix.executor >>
          matrix: *matrix

  nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - *build
      - *modelarchiver
      - *python-tests
      - &api-tests
        api-tests:
          name: api-tests-<< matrix.executor >>
          requires:
            - build-<< matrix.executor >>
          matrix: *matrix

  weekly:
    triggers:
      - schedule:
          cron: "0 0 * * 0"
          filters:
            branches:
              only:
                - master
    jobs:
      - *build
      - benchmark:
          name: benchmark-<< matrix.executor >>
          requires:
            - build-<< matrix.executor >>
          matrix: *matrix


================================================
FILE: .circleci/images/Dockerfile.python2.7
================================================
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
#

FROM awsdeeplearningteam/mms-build:python2.7@sha256:2b743d6724dead806873cce1330f7b8a0197399a35af47dfd7035251fdade122

# 2020 - Updated Build and Test dependencies

# Python packages for MMS Server
RUN pip install psutil future Pillow wheel twine requests mock numpy Image mxnet==1.5.0 enum34

# Python packages for pytests
RUN pip install pytest==4.0.0 pytest-cov pytest-mock

# Python packages for benchmark
RUN pip install pandas

# Install NodeJS and packages for API tests
RUN curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash - \
    && sudo apt-get install -y nodejs \
    && sudo npm install -g newman newman-reporter-html

# Install jmeter for benchmark
# ToDo: Remove --no-check-certificate; temporarily added to bypass jmeter-plugins.org's expired certificate
RUN cd /opt \
    && wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz \
    && tar -xzf apache-jmeter-5.3.tgz \
    && cd apache-jmeter-5.3 \
    && ln -s /opt/apache-jmeter-5.3/bin/jmeter /usr/local/bin/jmeter \
    && wget --no-check-certificate https://jmeter-plugins.org/get/ -O lib/ext/jmeter-plugins-manager-1.4.jar \
    && wget http://search.maven.org/remotecontent?filepath=kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -O lib/cmdrunner-2.2.jar \
    && java -cp lib/ext/jmeter-plugins-manager-1.4.jar org.jmeterplugins.repository.PluginManagerCMDInstaller \
    && bin/PluginsManagerCMD.sh install jpgc-synthesis=2.1,jpgc-filterresults=2.1,jpgc-mergeresults=2.1,jpgc-cmd=2.1,jpgc-perfmon=2.1

# bzt is used for the performance regression test suite.
# bzt requires a Python 3.6 runtime.
# Download pyenv, then use pyenv to install Python 3.6.5.
# The downloaded Python 3.6.5 is isolated and doesn't interfere with the default Python (2.7).
# Python 3.6.5 is only activated locally (pyenv local 3.6.5) in the test dir, right before
# the performance regression suite starts.
# !! The MMS server will continue using Python 2.7 !!
RUN curl https://pyenv.run | bash \
    && $HOME/.pyenv/bin/pyenv install 3.6.5
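The isolation described in the comment above rests on PATH precedence: a directory prepended to PATH wins command lookup, so pyenv's shims can shadow the default interpreter in one place without touching it elsewhere. A minimal standalone sketch of that mechanism, using a hypothetical stub script named `python-shim` in place of a real pyenv shim:

```shell
# A directory prepended to PATH is searched first, so its commands
# shadow same-named commands later in the PATH.
shimdir=$(mktemp -d)
printf '#!/bin/sh\necho 3.6.5\n' > "$shimdir/python-shim"
chmod +x "$shimdir/python-shim"
PATH="$shimdir:$PATH" python-shim    # lookup resolves to the stub first; prints 3.6.5
```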

================================================
FILE: .circleci/images/Dockerfile.python3.6
================================================
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
#

FROM awsdeeplearningteam/mms-build:python3.6@sha256:2c1afa8834907ceec641d254dffbf4bcc659ca2d00fd6f2872d7521f32c9fa2e

# 2020 - Updated Build and Test dependencies

# Python packages for MMS Server
RUN pip install psutil future Pillow wheel twine requests mock numpy Image mxnet==1.5.0

# Python packages for pytests
RUN pip install pytest==4.0.0 pytest-cov pytest-mock

# Python packages for benchmark
RUN pip install pandas

# Install NodeJS and packages for API tests
RUN curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash - \
    && sudo apt-get install -y nodejs \
    && sudo npm install -g newman newman-reporter-html

# Install jmeter for benchmark
# ToDo: Remove --no-check-certificate; temporarily added to bypass jmeter-plugins.org's expired certificate
RUN cd /opt \
    && wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz \
    && tar -xzf apache-jmeter-5.3.tgz \
    && cd apache-jmeter-5.3 \
    && ln -s /opt/apache-jmeter-5.3/bin/jmeter /usr/local/bin/jmeter \
    && wget --no-check-certificate https://jmeter-plugins.org/get/ -O lib/ext/jmeter-plugins-manager-1.4.jar \
    && wget http://search.maven.org/remotecontent?filepath=kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -O lib/cmdrunner-2.2.jar \
    && java -cp lib/ext/jmeter-plugins-manager-1.4.jar org.jmeterplugins.repository.PluginManagerCMDInstaller \
    && bin/PluginsManagerCMD.sh install jpgc-synthesis=2.1,jpgc-filterresults=2.1,jpgc-mergeresults=2.1,jpgc-cmd=2.1,jpgc-perfmon=2.1

================================================
FILE: .circleci/scripts/linux_build.sh
================================================
#!/bin/bash

python setup.py bdist_wheel --universal

================================================
FILE: .circleci/scripts/linux_test_api.sh
================================================
#!/bin/bash

MODEL_STORE_DIR='test/model_store'

MMS_LOG_FILE_MANAGEMENT='mms_management.log'
MMS_LOG_FILE_INFERENCE='mms_inference.log'
MMS_LOG_FILE_HTTPS='mms_https.log'
MMS_CONFIG_FILE_HTTPS='test/resources/config.properties'

POSTMAN_ENV_FILE='test/postman/environment.json'
POSTMAN_COLLECTION_MANAGEMENT='test/postman/management_api_test_collection.json'
POSTMAN_COLLECTION_INFERENCE='test/postman/inference_api_test_collection.json'
POSTMAN_COLLECTION_HTTPS='test/postman/https_test_collection.json'
POSTMAN_DATA_FILE_INFERENCE='test/postman/inference_data.json'

REPORT_FILE_MANAGEMENT='test/management-api-report.html'
REPORT_FILE_INFERENCE='test/inference-api-report.html'
REPORT_FILE_HTTPS='test/https-api-report.html'

start_mms_server() {
  multi-model-server --start --model-store "$1" >> "$2" 2>&1
  sleep 10
}

start_mms_secure_server() {
  multi-model-server --start --mms-config "$MMS_CONFIG_FILE_HTTPS" --model-store "$1" >> "$2" 2>&1
  sleep 10
}

stop_mms_server() {
  multi-model-server --stop
}

trigger_management_tests(){
  start_mms_server "$MODEL_STORE_DIR" "$MMS_LOG_FILE_MANAGEMENT"
  newman run -e "$POSTMAN_ENV_FILE" "$POSTMAN_COLLECTION_MANAGEMENT" \
             -r cli,html --reporter-html-export "$REPORT_FILE_MANAGEMENT" --verbose
  stop_mms_server
}

trigger_inference_tests(){
  start_mms_server "$MODEL_STORE_DIR" "$MMS_LOG_FILE_INFERENCE"
  newman run -e "$POSTMAN_ENV_FILE" "$POSTMAN_COLLECTION_INFERENCE" -d "$POSTMAN_DATA_FILE_INFERENCE" \
             -r cli,html --reporter-html-export "$REPORT_FILE_INFERENCE" --verbose
  stop_mms_server
}

trigger_https_tests(){
  start_mms_secure_server "$MODEL_STORE_DIR" "$MMS_LOG_FILE_HTTPS"
  newman run --insecure -e "$POSTMAN_ENV_FILE" "$POSTMAN_COLLECTION_HTTPS" \
             -r cli,html --reporter-html-export "$REPORT_FILE_HTTPS" --verbose
  stop_mms_server
}

mkdir -p "$MODEL_STORE_DIR"

case "$1" in
   'management')
      trigger_management_tests
      ;;
   'inference')
      trigger_inference_tests
      ;;
   'https')
      trigger_https_tests
      ;;
   'ALL')
      trigger_management_tests
      trigger_inference_tests
      trigger_https_tests
      ;;
   *)
     echo "Invalid option: '$1'"
     echo 'Please specify one of - management | inference | https | ALL'
     exit 1
     ;;
esac
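# The dispatch above maps one positional argument onto test-suite runners.
# A minimal standalone sketch of the same pattern, factored into a function
# (the `dispatch` name is illustrative, not part of this script):

# dispatch() {
#   case "$1" in
#     management|inference|https) echo "running $1 tests" ;;
#     ALL)                        echo "running all test suites" ;;
#     *)                          echo "Invalid option: '$1'" >&2; return 1 ;;
#   esac
# }

```shell
# Minimal sketch of the collection-dispatch pattern used in this script.
dispatch() {
  case "$1" in
    management|inference|https)
      echo "running $1 tests"
      ;;
    ALL)
      echo "running all test suites"
      ;;
    *)
      echo "Invalid option: '$1'" >&2
      return 1
      ;;
  esac
}

dispatch management   # prints: running management tests
```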

================================================
FILE: .circleci/scripts/linux_test_benchmark.sh
================================================
#!/bin/bash

# Hack needed to make this work with the existing benchmark.py,
# which expects jmeter to be present at a very specific location.
mkdir -p /home/ubuntu/.linuxbrew/Cellar/jmeter/5.3/libexec/bin/
ln -s /opt/apache-jmeter-5.3/bin/jmeter /home/ubuntu/.linuxbrew/Cellar/jmeter/5.3/libexec/bin/jmeter

multi-model-server --start >> mms.log 2>&1
sleep 30

cd benchmarks
python benchmark.py latency

multi-model-server --stop

================================================
FILE: .circleci/scripts/linux_test_modelarchiver.sh
================================================
#!/bin/bash

cd model-archiver/

# Lint test
pylint -rn --rcfile=./model_archiver/tests/pylintrc model_archiver/.

# Execute python unit tests
python -m pytest --cov-report html:results_units --cov=./ model_archiver/tests/unit_tests


# Install model archiver module
pip install .

# Execute integration tests
python -m pytest model_archiver/tests/integ_tests
# ToDo - Report for Integration tests ?

================================================
FILE: .circleci/scripts/linux_test_perf_regression.sh
================================================
#!/bin/bash

multi-model-server --start \
                   --models squeezenet=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar \
                   >> mms.log 2>&1
sleep 90

cd performance_regression

# Only on a Python 2 environment:
PY_MAJOR_VER=$(python -c 'import sys; print(sys.version_info.major)')
if [ "$PY_MAJOR_VER" -eq 2 ]; then
  # Hack to use python 3.6.5 for bzt installation and execution
  export PATH="/root/.pyenv/bin:/root/.pyenv/shims:$PATH"
  pyenv local 3.6.5
fi

# Install dependencies
pip install bzt

curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
bzt -o modules.jmeter.path=/opt/apache-jmeter-5.3/bin/jmeter \
    -o settings.artifacts-dir=/tmp/mms-performance-regression/ \
    -o modules.console.disable=true \
    imageInputModelPlan.jmx.yaml \
    -report

multi-model-server --stop
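# The major-version probe above can be exercised standalone. A sketch using
# python3 explicitly (the script itself probes whatever "python" resolves to):

```shell
# Detect the interpreter's major version, as the guard above does.
PY_MAJOR_VER=$(python3 -c 'import sys; print(sys.version_info.major)')
if [ "$PY_MAJOR_VER" -eq 2 ]; then
  echo "Python 2 environment: switch to the pyenv-provided 3.6.x first"
else
  echo "Python $PY_MAJOR_VER environment: bzt can be installed directly"
fi
```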

================================================
FILE: .circleci/scripts/linux_test_python.sh
================================================
#!/bin/bash

# Lint Test
pylint -rn --rcfile=./mms/tests/pylintrc mms/.

# Execute python tests
python -m pytest --cov-report html:htmlcov --cov=mms/ mms/tests/unit_tests/

================================================
FILE: .coveragerc
================================================
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
    if __name__ == "__main__" :

[run]
branch = True
omit = */__init__.py
       mms/tests/*
       mms/utils/model_server_error_codes.py
       mms/utils/timeit_decorator.py
       mms/storage.py
       mms/metrics/system_metrics.py
       mms/utils/mxnet/*
       mms/examples/metric_push_example.py
       mms/model_service/*


================================================
FILE: .github/PULL_REQUEST_TEMPLATE.md
================================================
Before or while filing an issue, please feel free to join our [<img src='../docs/images/slack.png' width='20px' /> slack channel](https://join.slack.com/t/mms-awslabs/shared_invite/enQtNDk4MTgzNDc5NzE4LTBkYTAwMjBjMTVmZTdkODRmYTZkNjdjZGYxZDI0ODhiZDdlM2Y0ZGJiZTczMGY3Njc4MmM3OTQ0OWI2ZDMyNGQ) to get in touch with the development team, ask questions, find out what's cooking, and more!

## Issue #, if available:

## Description of changes:

## Testing done:

**To run CI tests on your changes refer [README.md](https://github.com/awslabs/multi-model-server/blob/master/ci/README.md)**

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.


================================================
FILE: .gitignore
================================================
.gradle

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

# mac
.DS_Store

# PyCharm
.idea/

# Log
*.log.*

# Model
*.model

# Pictures
*.jpg

# Prop file in benchmark
benchmarks/*.properties

# intellij files
*.iml

# MMS files
mms/frontend
mms/plugins

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: LICENSE.txt
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: MANIFEST.in
================================================
include mms/frontend/model-server.jar
include PyPiDescription.rst
include mms/configs/*


================================================
FILE: PyPiDescription.rst
================================================
Project Description
===================

Multi Model Server (MMS) is a flexible and easy to use tool for
serving deep learning models exported from `MXNet <http://mxnet.io/>`__
or the Open Neural Network Exchange (`ONNX <http://onnx.ai/>`__).

Use the MMS Server CLI, or the pre-configured Docker images, to start a
service that sets up HTTP endpoints to handle model inference requests.

Detailed documentation and examples are provided in the `docs
folder <https://github.com/awslabs/multi-model-server/blob/master/docs/README.md>`__.

Prerequisites
-------------

* **java 8**: Required. MMS uses Java to serve HTTP requests. You must install Java 8 (or later) and make sure the ``java`` executable is available on your ``$PATH`` *before* installing MMS. If you have multiple Java versions installed, you can use the ``$JAVA_HOME`` environment variable to control which one is used.
* **mxnet**: As of MMS 1.0, ``mxnet`` is no longer installed by default. You must install it manually if you use MXNet.

For Ubuntu:
::

    sudo apt-get install openjdk-8-jre-headless


For CentOS:
::

    sudo yum install java-1.8.0-openjdk


For Mac:
::

    brew tap caskroom/versions
    brew update
    brew cask install java8


Install MXNet:
::

    pip install mxnet

MXNet offers MKL pip packages that are much faster when running on Intel hardware.
To install the MKL package for CPU:
::

    pip install mxnet-mkl

or for GPU instance:

::

    pip install mxnet-cu92mkl


Installation
------------

::

    pip install multi-model-server

Development
-----------

We welcome new contributors of all experience levels. For information on
how to install MMS for development, refer to the `MMS
docs <https://github.com/awslabs/multi-model-server/blob/master/docs/install.md>`__.

Important links
---------------

-  `Official source code
   repo <https://github.com/awslabs/multi-model-server>`__
-  `Download
   releases <https://pypi.org/project/multi-model-server/#files>`__
-  `Issue
   tracker <https://github.com/awslabs/multi-model-server/issues>`__

Source code
-----------

You can check the latest source code as follows:

::

    git clone https://github.com/awslabs/multi-model-server.git

Testing
-------

After installation, try out the MMS Quickstart for

- `Serving a Model <https://github.com/awslabs/multi-model-server/blob/master/README.md#serve-a-model>`__
- `Create a Model Archive <https://github.com/awslabs/multi-model-server/blob/master/README.md#model-archive>`__.

Help and Support
----------------

-  `Documentation <https://github.com/awslabs/multi-model-server/blob/master/docs/README.md>`__
-  `Forum <https://discuss.mxnet.io/latest>`__

Citation
--------

If you use MMS in a publication or project, please cite MMS:
https://github.com/awslabs/multi-model-server


================================================
FILE: README.md
================================================
Multi Model Server
=======

| ubuntu/python-2.7 | ubuntu/python-3.6 |
|---------|---------|
| ![Python3 Build Status](https://codebuild.us-east-1.amazonaws.com/badges?uuid=eyJlbmNyeXB0ZWREYXRhIjoicGZ6dXFmMU54UGxDaGsxUDhXclJLcFpHTnFMNld6cW5POVpNclc4Vm9BUWJNamZKMGdzbk1lOU92Z0VWQVZJTThsRUttOW8rUzgxZ2F0Ull1U1VkSHo0PSIsIml2UGFyYW1ldGVyU3BlYyI6IkJJaFc1QTEwRGhwUXY1dDgiLCJtYXRlcmlhbFNldFNlcmlhbCI6MX0%3D&branch=master) | ![Python2 Build Status](https://codebuild.us-east-1.amazonaws.com/badges?uuid=eyJlbmNyeXB0ZWREYXRhIjoiYVdIajEwVW9uZ3cvWkZqaHlaRGNUU2M0clE2aUVjelJranJoYTI3S1lHT3R5THJXdklzejU2UVM5NWlUTWdwaVVJalRwYi9GTnJ1aUxiRXIvTGhuQ2g0PSIsIml2UGFyYW1ldGVyU3BlYyI6IjArcHVCaFgvR1pTN1JoSG4iLCJtYXRlcmlhbFNldFNlcmlhbCI6MX0%3D&branch=master) |

Multi Model Server (MMS) is a flexible and easy to use tool for serving deep learning models trained using any ML/DL framework.

Use the MMS Server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.

A quick overview and examples for both serving and packaging are provided below. Detailed documentation and examples are provided in the [docs folder](docs/README.md).

Join our [<img src='docs/images/slack.png' width='20px' /> slack channel](https://join.slack.com/t/mms-awslabs/shared_invite/zt-6cv1kx46-MBTOPLNDwmyBynEvFBsNkQ) to get in touch with the development team, ask questions, find out what's cooking, and more!

## Contents of this Document
* [Quick Start](#quick-start)
* [Serve a Model](#serve-a-model)
* [Other Features](#other-features)
* [External demos powered by MMS](#external-demos-powered-by-mms)
* [Contributing](#contributing)


## Other Relevant Documents
* [Latest Version Docs](docs/README.md)
* [v0.4.0 Docs](https://github.com/awslabs/multi-model-server/blob/v0.4.0/docs/README.md)
* [Migrating from v0.4.0 to v1.0.0](docs/migration.md)

## Quick Start
### Prerequisites
Before proceeding further with this document, make sure you have the following prerequisites.
1. Ubuntu, CentOS, or macOS. Windows support is experimental. The following instructions will focus on Linux and macOS only.
1. Python     - Multi Model Server requires Python to run the workers.
1. pip        - pip is the Python package management system.
1. Java 8     - Multi Model Server requires Java 8 to start. You have the following options for installing Java 8:

    For Ubuntu:
    ```bash
    sudo apt-get install openjdk-8-jre-headless
    ```

    For CentOS:
    ```bash
    sudo yum install java-1.8.0-openjdk
    ```

    For macOS:
    ```bash
    brew tap homebrew/cask-versions
    brew update
    brew cask install adoptopenjdk8
    ```

### Installing Multi Model Server with pip

#### Setup

**Step 1:** Setup a Virtual Environment

We recommend installing and running Multi Model Server in a virtual environment. Installing all of the Python dependencies in a virtual environment isolates them from the system Python and eases dependency management.

One option is Virtualenv, which creates isolated Python environments. Install it as follows:

```bash
pip install virtualenv
```

Then create a virtual environment:
```bash
# Assuming we want to run python2.7 in /usr/local/bin/python2.7
virtualenv -p /usr/local/bin/python2.7 /tmp/pyenv2
# Enter this virtual environment as follows
source /tmp/pyenv2/bin/activate
```

Refer to the [Virtualenv documentation](https://virtualenv.pypa.io/en/stable/) for further information.

**Step 2:** Install MXNet
MMS won't install the MXNet engine by default. If it isn't already installed in your virtual environment, you must install one of the MXNet pip packages.

For CPU inference, `mxnet-mkl` is recommended. Install it as follows:

```bash
# Recommended for running Multi Model Server on CPU hosts
pip install mxnet-mkl
```

For GPU inference, `mxnet-cu92mkl` is recommended. Install it as follows:

```bash
# Recommended for running Multi Model Server on GPU hosts
pip install mxnet-cu92mkl
```

**Step 3:** Install or Upgrade MMS as follows:

```bash
# Install latest released version of multi-model-server 
pip install multi-model-server
```

To upgrade from a previous version of `multi-model-server`, please refer to the [migration reference](docs/migration.md) document.

**Notes:**
* A minimal version of `model-archiver` will be installed with MMS as a dependency. See [model-archiver](model-archiver/README.md) for more options and details.
* See the [advanced installation](docs/install.md) page for more options and troubleshooting.

### Serve a Model

Once installed, you can get the MMS model server up and running very quickly. Try `--help` to see all the available CLI options.

```bash
multi-model-server --help
```

For this quick start, we'll skip over most of the features, but be sure to take a look at the [full server docs](docs/server.md) when you're ready.

Here is an easy example for serving an object classification model:
```bash
multi-model-server --start --models squeezenet=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar
```

After you execute the command above, MMS is running on your host, listening for inference requests. **Please note that if you specify model(s) when starting MMS, it automatically scales backend workers to the number of available vCPUs (on a CPU instance) or available GPUs (on a GPU instance). On powerful hosts with many compute resources (vCPUs or GPUs), this start-up and autoscaling process may take considerable time. To minimize MMS start-up time, you can avoid registering and scaling models at start-up and defer that work by using the corresponding [Management API](docs/management_api.md#register-a-model) calls (this allows finer-grained control over how many resources are allocated to any particular model).**

To test it out, open a new terminal window next to the one running MMS. Then use `curl` to download one of these [cute pictures of a kitten](https://www.google.com/search?q=cute+kitten&tbm=isch&hl=en&cr=&safe=images), with curl's `-o` flag naming it `kitten.jpg` for you. Then `curl` a `POST` to the MMS predict endpoint with the kitten's image.

![kitten](docs/images/kitten_small.jpg)

In the example below, we provide a shortcut for these steps.

```bash
curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl -X POST http://127.0.0.1:8080/predictions/squeezenet -T kitten.jpg
```

The predict endpoint will return a prediction response in JSON. It will look something like the following result:

```json
[
  {
    "probability": 0.8582232594490051,
    "class": "n02124075 Egyptian cat"
  },
  {
    "probability": 0.09159987419843674,
    "class": "n02123045 tabby, tabby cat"
  },
  {
    "probability": 0.0374876894056797,
    "class": "n02123159 tiger cat"
  },
  {
    "probability": 0.006165083032101393,
    "class": "n02128385 leopard, Panthera pardus"
  },
  {
    "probability": 0.0031716004014015198,
    "class": "n02127052 lynx, catamount"
  }
]
```

You will see this result in the response to your `curl` call to the predict endpoint, and in the server logs in the terminal window running MMS. It's also being [logged locally with metrics](docs/metrics.md).
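Since the response is plain JSON, it is easy to consume programmatically. A minimal sketch (the response body below is copied from the example output above):

```python
import json

# Example predict-endpoint response (copied from the sample output above)
response_body = """
[
  {"probability": 0.8582232594490051, "class": "n02124075 Egyptian cat"},
  {"probability": 0.09159987419843674, "class": "n02123045 tabby, tabby cat"}
]
"""

predictions = json.loads(response_body)
# Entries are ordered by probability, so the first one is the top prediction
top = predictions[0]
print("{} ({:.1%})".format(top["class"], top["probability"]))
# → n02124075 Egyptian cat (85.8%)
```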

Other models can be downloaded from the [model zoo](docs/model_zoo.md), so try out some of those as well.

Now you've seen how easy it can be to serve a deep learning model with MMS! [Would you like to know more?](docs/server.md)

### Stopping the running model server
To stop the currently running model server instance, run the following command:
```bash
$ multi-model-server --stop
```
You will see output confirming that multi-model-server has stopped.

### Create a Model Archive

MMS enables you to package up all of your model artifacts into a single model archive. This makes it easy to share and deploy your models.
To package a model, check out [model archiver documentation](model-archiver/README.md)
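For context, a `.mar` archive produced by the model archiver is an ordinary zip file bundling the model artifacts together with a manifest. A toy illustration (the file names and manifest contents below are made up for illustration, not a real model):

```python
import zipfile

# Build a toy archive to show the layout (contents are illustrative, not a real model)
with zipfile.ZipFile("/tmp/toy.mar", "w") as mar:
    mar.writestr("MAR-INF/MANIFEST.json", '{"model": {"modelName": "toy"}}')
    mar.writestr("model_handler.py", "# custom service code would live here")

# A .mar can be inspected with any zip tool
with zipfile.ZipFile("/tmp/toy.mar") as mar:
    print(mar.namelist())
```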

## Recommended production deployments

* MMS doesn't provide authentication. You have to run your own authentication proxy in front of MMS.
* MMS doesn't provide throttling, so it is vulnerable to DDoS attacks. It's recommended to run MMS behind a firewall.
* MMS only allows localhost access by default; see [Network configuration](docs/configuration.md#configure-mms-listening-port) for details.
* SSL is not enabled by default; see [Enable SSL](docs/configuration.md#enable-ssl) for details.
* MMS uses a `config.properties` file to configure its behavior; see the [Manage MMS](docs/configuration.md) page for details on how to configure MMS.
* For better security, we recommend running MMS inside a Docker container. This project includes Dockerfiles to build containers recommended for production deployments. These containers demonstrate how to customize your own production MMS deployment. Basic usage can be found in the [Docker readme](docker/README.md).

## Other Features

Browse over to the [Docs readme](docs/README.md) for the full index of documentation. This includes more examples, how to customize the API service, API endpoint details, and more.

## External demos powered by MMS

Here are some example demos of deep learning applications, powered by MMS:

 |  |   |
|:------:|:-----------:|
| [Product Review Classification](https://thomasdelteil.github.io/TextClassificationCNNs_MXNet/) <img width="325" alt="demo4" src="https://user-images.githubusercontent.com/3716307/48382335-6099ae00-e695-11e8-8110-f692b9ecb831.png"> |[Visual Search](https://thomasdelteil.github.io/VisualSearch_MXNet/) <img width="325" alt="demo1" src="https://user-images.githubusercontent.com/3716307/48382332-6099ae00-e695-11e8-9fdd-17b5e7d6d0ec.png">|
| [Facial Emotion Recognition](https://thomasdelteil.github.io/FacialEmotionRecognition_MXNet/) <img width="325" alt="demo2" src="https://user-images.githubusercontent.com/3716307/48382333-6099ae00-e695-11e8-8bc6-e2c7dce3527c.png"> |[Neural Style Transfer](https://thomasdelteil.github.io/NeuralStyleTransfer_MXNet/) <img width="325" alt="demo3" src="https://user-images.githubusercontent.com/3716307/48382334-6099ae00-e695-11e8-904a-0906cc0797bc.png"> |

## Contributing

We welcome all contributions!

To file a bug or request a feature, please file a GitHub issue. Pull requests are welcome.


================================================
FILE: _config.yml
================================================
theme: jekyll-theme-cayman

================================================
FILE: benchmarks/README.md
================================================
# Multi Model Server Benchmarking

The benchmarks measure the performance of MMS on various models and workloads.  They support either a number of built-in models or a custom model passed in as a path or URL to the .model file, and run various benchmarks using these models (see the benchmarks section below).  The benchmarks are driven by a python3 script on the user's machine and executed through jmeter.  MMS runs on the same machine in a Docker instance to avoid network latencies.  The benchmark must be run from within the full MMS repo because, for ease of development, it executes the local code as the version of MMS (recompiling it between runs).

## Installation

### Ubuntu

The script is mainly intended to run on an Ubuntu EC2 instance.  For this reason, we have provided an `install_dependencies.sh` script to install everything needed to execute the benchmark in this environment.  All you need to do is clone the MMS repo and run this script.

### MacOS

For macOS, you should have python3 and Java installed.  If you wish to run the default benchmarks featuring a docker-based instance of MMS, you will need to install Docker as well.  Finally, you will need to install jmeter with plugins, which can be accomplished by running `mac_install_dependencies.sh`.

### Other

For other environments, manual installation is necessary.  The list of dependencies to install can be found below or by reading the Ubuntu installation script.

The benchmarking script requires the following to run:
- python3
- A JDK and JRE
- jmeter installed through homebrew or linuxbrew with the plugin manager and the following plugins: jpgc-synthesis=2.1,jpgc-filterresults=2.1,jpgc-mergeresults=2.1,jpgc-cmd=2.1,jpgc-perfmon=2.1
- Docker-ce with the current user added to the docker group
- Nvidia-docker (for GPU)


## Models

The pre-loaded models for the benchmark can mostly be found in the [MMS model zoo](https://github.com/awslabs/multi-model-server/blob/master/docs/model_zoo.md).  We currently support the following:
- [resnet: ResNet-18 (Default)](https://github.com/awslabs/multi-model-server/blob/master/docs/model_zoo.md#resnet-18)
- [squeezenet: SqueezeNet V1.1](https://github.com/awslabs/multi-model-server/blob/master/docs/model_zoo.md#squeezenet_v1.1)
- [lstm: lstm-ptb](https://github.com/awslabs/multi-model-server/blob/master/docs/model_zoo.md#lstm-ptb)
- [noop: noop-v1.0](https://s3.amazonaws.com/model-server/models/noop/noop-v1.0.model) Simple Noop model which returns "Hello world" to any input specified.
- [noop_echo: noop_echo-v1.0](https://s3.amazonaws.com/model-server/models/noop/noop_echo-v1.0.model) Simple Noop model which returns whatever input is given to it.

## Benchmarks

We support several basic benchmarks:
- throughput: Run inference with enough threads to occupy all workers and ensure full saturation of resources to find the throughput.  The number of threads defaults to 100.
- latency: Run inference with a single thread to determine the latency
- ping: Test the throughput of pinging against the frontend
- load: Loads the same model many times in parallel.  The number of loads is given by the "count" option and defaults to 16.
- repeated_scale_calls: Will scale the model up to "scale_up_workers"=16 then down to "scale_down_workers"=1 then up and down repeatedly.
- multiple_models: Loads and scales up three models (1. noop, 2. lstm, and 3. resnet), at the same time, runs inferences on them, and then scales them down.  Use the options "urlN", "modelN_name", "dataN" to specify the model url, model name, and the data to pass to the model respectively.  data1 and data2 are of the format "&apos;Some garbage data being passed here&apos;" and data3 is the filesystem path to a file to upload.

We also support compound benchmarks:
- concurrent_inference: Runs the basic benchmark with different numbers of threads


## Examples

Run basic latency test on default resnet-18 model\
```./benchmark.py latency```


Run basic throughput test on default resnet-18 model.\
```./benchmark.py throughput```


Run all benchmarks\
```./benchmark.py --all```


Run using the noop-v1.0 model\
```./benchmark.py latency -m noop_v1.0```


Run on GPU (4 gpus)\
```./benchmark.py latency -g 4```


Run with a custom image\
```./benchmark.py latency -i {imageFilePath}```


Run with a custom model (for now, this works only for CNN-based models that accept an image as input; we will add support for more input types to this command in the future)\
```./benchmark.py latency -c {modelUrl} -i {imageFilePath}```


Run with custom options\
```./benchmark.py repeated_scale_calls --options scale_up_workers 100 scale_down_workers 10```


Run against an already running instance of MMS\
```./benchmark.py latency --mms 127.0.0.1``` (defaults to http, port 80, management port = port + 1)\
```./benchmark.py latency --mms 127.0.0.1:8080 --management-port 8081```\
```./benchmark.py latency --mms https://127.0.0.1:8443```


Run verbose with only a single loop\
```./benchmark.py latency -v -l 1```
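The `--mms` endpoint defaults noted above (http scheme, port 80, management port = inference port + 1) can be sketched as follows. This is an illustration of the stated defaults, not code taken from benchmark.py:

```python
from urllib.parse import urlsplit

def parse_mms_endpoint(endpoint, management_port=None):
    """Interpret an --mms endpoint string per the defaults described above
    (illustrative sketch; the real script may differ in details)."""
    if "://" not in endpoint:
        endpoint = "http://" + endpoint  # scheme defaults to http
    parts = urlsplit(endpoint)
    port = parts.port or 80  # port defaults to 80
    return {
        "scheme": parts.scheme,
        "host": parts.hostname,
        "inference_port": port,
        # management port defaults to inference port + 1
        "management_port": management_port if management_port else port + 1,
    }

print(parse_mms_endpoint("127.0.0.1"))
print(parse_mms_endpoint("127.0.0.1:8080", management_port=8081))
print(parse_mms_endpoint("https://127.0.0.1:8443"))
```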


## Benchmark options

The full list of options can be found by running with the -h or --help flags.


## Profiling

### Frontend

The benchmarks can be used in conjunction with standard profiling tools such as JProfiler to analyze the system performance.  JProfiler can be downloaded from their [website](https://www.ej-technologies.com/products/jprofiler/overview.html).  Once downloaded, open up JProfiler and follow these steps:

1. Run MMS directly through gradle (do not use docker).  This can be done either on your machine or on a remote machine accessible through SSH.
2. In JProfiler, select "Attach" from the ribbon and attach to the ModelServer.  The process name in the attach window should be "com.amazonaws.ml.mms.ModelServer".  If it is on a remote machine, select "On another computer" in the attach window and enter the SSH details.  For the session startup settings, you can leave it with the defaults.  At this point, you should see live CPU and Memory Usage data on JProfiler's Telemetries section.
3. Select Start Recordings in JProfiler's ribbon
4. Run the Benchmark script targeting your running MMS instance.  It might run something like `./benchmark.py throughput --mms https://127.0.0.1:8443`.  It can be run on either your local machine or a remote machine (if you are running remote), but we recommend running the benchmark on the same machine as the model server to avoid confounding network latencies.
5. Once the benchmark script has finished running, select Stop Recordings in JProfiler's ribbon

Once you have stopped recording, you should be able to analyze the data.  One useful section to examine is CPU views > Call Tree and CPU views > Hot Spots to see where the processor time is going.

### Backend

The benchmarks can also be used to analyze the backend performance using cProfile.  It does not require any additional packages to run the benchmark, but viewing the logs does require an additional package.  Run `pip install snakeviz` to install this.  To run the python profiling, follow these steps:

1. In the file `mms/model_service_worker.py`, set the constant BENCHMARK to true at the top to enable benchmarking.
2. Run the benchmark and MMS.  They can either be done automatically inside the docker container or separately with the "--mms" flag.
3. Run MMS directly through gradle (do not use docker).  This can be done either on your machine or on a remote machine accessible through SSH.
4. Run the Benchmark script targeting your running MMS instance.  It might run something like `./benchmark.py throughput --mms https://127.0.0.1:8443`.  It can be run on either your local machine or a remote machine (if you are running remote), but we recommend running the benchmark on the same machine as the model server to avoid confounding network latencies.
5. Run `snakeviz /tmp/mmsPythonProfile.prof` to view the profiling data.  It should start up a web server on your machine and automatically open the page.
6. Don't forget to set BENCHMARK = False in the model_service_worker.py file after you are finished.
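For reference, the profile file that snakeviz reads is a standard `cProfile` stats dump. A minimal, self-contained sketch of producing and summarizing one (the workload function and output path here are stand-ins, not MMS worker code):

```python
import cProfile
import io
import pstats

def handle_request():
    # Stand-in workload; in MMS this would be the worker's request handling
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Dump a stats file of the same kind snakeviz opens (path is illustrative)
profiler.dump_stats("/tmp/toyPythonProfile.prof")

# A quick text summary without snakeviz
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
print(stream.getvalue())
```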


================================================
FILE: benchmarks/benchmark.py
================================================
#!/usr/bin/env python3

# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

"""
Execute the MMS Benchmark.  For instructions, run with the --help flag
"""

# pylint: disable=redefined-builtin

import argparse
import itertools
import multiprocessing
import os
import pprint
import shutil
import subprocess
import sys
import time
import traceback
from functools import reduce
from urllib.request import urlretrieve

import pandas as pd

BENCHMARK_DIR = "/tmp/MMSBenchmark/"

OUT_DIR = os.path.join(BENCHMARK_DIR, 'out/')
RESOURCE_DIR = os.path.join(BENCHMARK_DIR, 'resource/')

RESOURCE_MAP = {
    'kitten.jpg': 'https://s3.amazonaws.com/model-server/inputs/kitten.jpg'
}

# Listing out all the JMX files
JMX_IMAGE_INPUT_MODEL_PLAN = 'imageInputModelPlan.jmx'
JMX_TEXT_INPUT_MODEL_PLAN = 'textInputModelPlan.jmx'
JMX_PING_PLAN = 'pingPlan.jmx'
JMX_CONCURRENT_LOAD_PLAN = 'concurrentLoadPlan.jmx'
JMX_CONCURRENT_SCALE_CALLS = 'concurrentScaleCalls.jmx'
JMX_MULTIPLE_MODELS_LOAD_PLAN = 'multipleModelsLoadPlan.jmx'
JMX_GRAPHS_GENERATOR_PLAN = 'graphsGenerator.jmx'

# Listing out the models tested
MODEL_RESNET_18 = 'resnet-18'
MODEL_SQUEEZE_NET = 'squeezenet'
MODEL_LSTM_PTB = 'lstm_ptb'
MODEL_NOOP = 'noop-v1.0'


MODEL_MAP = {
    MODEL_SQUEEZE_NET: (JMX_IMAGE_INPUT_MODEL_PLAN, {'url': 'https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model', 'model_name': MODEL_SQUEEZE_NET, 'input_filepath': 'kitten.jpg'}),
    MODEL_RESNET_18: (JMX_IMAGE_INPUT_MODEL_PLAN, {'url': 'https://s3.amazonaws.com/model-server/models/resnet-18/resnet-18.model', 'model_name': MODEL_RESNET_18, 'input_filepath': 'kitten.jpg'}),
    MODEL_LSTM_PTB: (JMX_TEXT_INPUT_MODEL_PLAN, {'url': 'https://s3.amazonaws.com/model-server/models/lstm_ptb/lstm_ptb.model', 'model_name': MODEL_LSTM_PTB, 'data': 'lstm_ip.json'}),
    MODEL_NOOP: (JMX_TEXT_INPUT_MODEL_PLAN, {'url': 'https://s3.amazonaws.com/model-server/models/noop/noop-v1.0.mar', 'model_name': MODEL_NOOP, 'data': 'noop_ip.txt'})
}


# Mapping of which row is relevant for a given JMX Test Plan
EXPERIMENT_RESULTS_MAP = {
    JMX_IMAGE_INPUT_MODEL_PLAN: ['Inference Request'],
    JMX_TEXT_INPUT_MODEL_PLAN: ['Inference Request'],
    JMX_PING_PLAN: ['Ping Request'],
    JMX_CONCURRENT_LOAD_PLAN: ['Load Model Request'],
    JMX_CONCURRENT_SCALE_CALLS: ['Scale Up Model', 'Scale Down Model'],
    JMX_MULTIPLE_MODELS_LOAD_PLAN: ['Inference Request']
}


JMETER_RESULT_SETTINGS = {
    'jmeter.reportgenerator.overall_granularity': 1000,
    # 'jmeter.reportgenerator.report_title': '"MMS Benchmark Report Dashboard"',
    'aggregate_rpt_pct1': 50,
    'aggregate_rpt_pct2': 90,
    'aggregate_rpt_pct3': 99,
}

# Maps column names in the generated output CSV to more readable names
AGGREGATE_REPORT_CSV_LABELS_MAP = {
    'aggregate_report_rate': 'Throughput',
    'average': 'Average',
    'aggregate_report_median': 'Median',
    'aggregate_report_90%_line': 'aggregate_report_90_line',
    'aggregate_report_99%_line': 'aggregate_report_99_line',
    'aggregate_report_error%': 'aggregate_report_error'
}


CELLAR = '/home/ubuntu/.linuxbrew/Cellar/jmeter' if 'linux' in sys.platform else '/usr/local/Cellar/jmeter'
JMETER_VERSION = os.listdir(CELLAR)[0]
CMDRUNNER = '{}/{}/libexec/lib/ext/CMDRunner.jar'.format(CELLAR, JMETER_VERSION)
JMETER = '{}/{}/libexec/bin/jmeter'.format(CELLAR, JMETER_VERSION)
# Two directory levels up from this file: the repository root
MMS_BASE = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
JMX_BASE = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'jmx')
CONFIG_PROP = os.path.join(MMS_BASE, 'benchmarks', 'config.properties')
CONFIG_PROP_TEMPLATE = os.path.join(MMS_BASE, 'benchmarks', 'config_template.properties')

DOCKER_MMS_BASE = "/multi-model-server"
DOCKER_CONFIG_PROP = os.path.join(DOCKER_MMS_BASE, 'benchmarks', 'config.properties')

# Commenting out NOOPs for now since there's a bug in MMS model loading for .mar files
ALL_BENCHMARKS = list(itertools.product(('latency', 'throughput'), (MODEL_RESNET_18, MODEL_NOOP, MODEL_LSTM_PTB)))
               # + [('multiple_models', MODEL_NOOP)]
               # + list(itertools.product(('load', 'repeated_scale_calls'), (MODEL_RESNET_18,))) \ To Add once
               # repeated_scale_calls is fixed


BENCHMARK_NAMES = ['latency', 'throughput']

class ChDir:
    """Context manager that temporarily changes the current working directory"""

    def __init__(self, path):
        self.curPath = os.getcwd()
        self.path = path

    def __enter__(self):
        os.chdir(self.path)

    def __exit__(self, *args):
        os.chdir(self.curPath)


def basename(path):
    return os.path.splitext(os.path.basename(path))[0]


def get_resource(name):
    url = RESOURCE_MAP[name]
    path = os.path.join(RESOURCE_DIR, name)
    if not os.path.exists(path):
        directory = os.path.dirname(path)
        if not os.path.exists(directory):
            os.makedirs(directory)
        urlretrieve(url, path)
    return path


def run_process(cmd, wait=True, **kwargs):
    output = None if pargs.verbose else subprocess.DEVNULL
    if pargs.verbose:
        print(' '.join(cmd) if isinstance(cmd, list) else cmd)
    if not kwargs.get('shell') and isinstance(cmd, str):
        cmd = cmd.split(' ')
    if 'stdout' not in kwargs:
        kwargs['stdout'] = output
    if 'stderr' not in kwargs:
        kwargs['stderr'] = output
    p = subprocess.Popen(cmd, **kwargs)
    if wait:
        p.wait()
    return p


def run_single_benchmark(jmx, jmeter_args=dict(), threads=100, out_dir=None):
    if out_dir is None:
        out_dir = os.path.join(OUT_DIR, benchmark_name, basename(benchmark_model))
    if os.path.exists(out_dir):
        shutil.rmtree(out_dir)
    os.makedirs(out_dir)

    protocol = 'http'
    hostname = '127.0.0.1'
    port = 8080
    threads = pargs.threads[0] if pargs.threads else threads
    workers = pargs.workers[0] if pargs.workers else (
        pargs.gpus[0] if pargs.gpus else multiprocessing.cpu_count()
    )

    if pargs.mms:
        url = pargs.mms[0]
        if '://' in url:
            protocol, url = url.split('://')
        if ':' in url:
            hostname, port = url.split(':')
            port = int(port)
        else:
            hostname = url
            port = 80
    else:
        # Start MMS
        docker = 'nvidia-docker' if pargs.gpus else 'docker'
        container = 'mms_benchmark_gpu' if pargs.gpus else 'mms_benchmark_cpu'
        docker_path = 'awsdeeplearningteam/multi-model-server:nightly-mxnet-gpu' \
            if pargs.gpus else 'awsdeeplearningteam/multi-model-server:nightly-mxnet-cpu'
        if pargs.docker:
            container = 'mms_benchmark_{}'.format(pargs.docker[0].split('/')[1])
            docker_path = pargs.docker[0]
        run_process("{} rm -f {}".format(docker, container))
        docker_run_call = "{} run --name {} -p 8080:8080 -p 8081:8081 -itd {}".format(docker, container, docker_path)
        run_process(docker_run_call)

    management_port = int(pargs.management[0]) if pargs.management else port + 1
    time.sleep(300)  # give the MMS container time to start up

    try:
        # temp files
        tmpfile = os.path.join(out_dir, 'output.jtl')
        logfile = os.path.join(out_dir, 'jmeter.log')
        outfile = os.path.join(out_dir, 'out.csv')
        perfmon_file = os.path.join(out_dir, 'perfmon.csv')
        graphsDir = os.path.join(out_dir, 'graphs')
        reportDir = os.path.join(out_dir, 'report')

        # run jmeter
        run_jmeter_args = {
            'hostname': hostname,
            'port': port,
            'management_port': management_port,
            'protocol': protocol,
            'min_workers': workers,
            'rampup': 5,
            'threads': threads,
            'loops': int(pargs.loops[0]),
            'perfmon_file': perfmon_file
        }
        run_jmeter_args.update(JMETER_RESULT_SETTINGS)
        run_jmeter_args.update(jmeter_args)
        run_jmeter_args.update(dict(zip(pargs.options[::2], pargs.options[1::2])))
        abs_jmx = jmx if os.path.isabs(jmx) else os.path.join(JMX_BASE, jmx)
        jmeter_args_str = ' '.join(sorted(['-J{}={}'.format(key, val) for key, val in run_jmeter_args.items()]))
        jmeter_call = '{} -n -t {} {} -l {} -j {} -e -o {}'.format(JMETER, abs_jmx, jmeter_args_str, tmpfile, logfile, reportDir)
        run_process(jmeter_call)

        time.sleep(30)
        # run AggregateReport
        ag_call = 'java -jar {} --tool Reporter --generate-csv {} --input-jtl {} --plugin-type AggregateReport'.format(CMDRUNNER, outfile, tmpfile)
        run_process(ag_call)

        # Generate output graphs
        gLogfile = os.path.join(out_dir, 'graph_jmeter.log')
        graphing_args = {
            'raw_output': graphsDir,
            'jtl_input': tmpfile
        }
        graphing_args.update(JMETER_RESULT_SETTINGS)
        gjmx = os.path.join(JMX_BASE, JMX_GRAPHS_GENERATOR_PLAN)
        graphing_args_str = ' '.join(['-J{}={}'.format(key, val) for key, val in graphing_args.items()])
        graphing_call = '{} -n -t {} {} -j {}'.format(JMETER, gjmx, graphing_args_str, gLogfile)
        run_process(graphing_call)

        print("Output available at {}".format(out_dir))
        print("Report generated at {}".format(os.path.join(reportDir, 'index.html')))

        data_frame = pd.read_csv(outfile, index_col=0)
        report = list()
        for val in EXPERIMENT_RESULTS_MAP[jmx]:
            for full_val in [fv for fv in data_frame.index if val in fv]:
                report.append(decorate_metrics(data_frame, full_val))

        return report

    except Exception:  # pylint: disable=broad-except
        traceback.print_exc()


def run_multi_benchmark(key, xs, *args, **kwargs):
    out_dir = os.path.join(OUT_DIR, benchmark_name, basename(benchmark_model))
    if os.path.exists(out_dir):
        shutil.rmtree(out_dir)
    os.makedirs(out_dir)

    reports = dict()
    out_dirs = []
    for i, x in enumerate(xs):
        print("Running value {}={} (value {}/{})".format(key, x, i+1, len(xs)))
        kwargs[key] = x
        sub_out_dir = os.path.join(out_dir, str(i+1))
        out_dirs.append(sub_out_dir)
        report = run_single_benchmark(*args, out_dir=sub_out_dir, **kwargs)
        reports[x] = report

    # files
    merge_results = os.path.join(out_dir, 'merge-results.properties')
    joined = os.path.join(out_dir, 'joined.csv')
    reportDir = os.path.join(out_dir, 'report')

    # merge runs together
    inputJtls = [os.path.join(out_dirs[i], 'output.jtl') for i in range(len(xs))]
    prefixes = ["{} {}: ".format(key, x) for x in xs]
    baseJtl = inputJtls[0]
    basePrefix = prefixes[0]
    for i in range(1, len(xs), 3): # MergeResults only joins up to 4 at a time
        with open(merge_results, 'w') as f:
            curInputJtls = [baseJtl] + inputJtls[i:i+3]
            curPrefixes = [basePrefix] + prefixes[i:i+3]
            for j, (jtl, p) in enumerate(zip(curInputJtls, curPrefixes)):
                f.write("inputJtl{}={}\n".format(j+1, jtl))
                f.write("prefixLabel{}={}\n".format(j+1, p))
                f.write("\n")
        merge_call = 'java -jar {} --tool Reporter --generate-csv joined.csv --input-jtl {} --plugin-type MergeResults'.format(CMDRUNNER, merge_results)
        time.sleep(30)
        run_process(merge_call)
        shutil.move('joined.csv', joined) # MergeResults ignores path given and puts result into cwd
        baseJtl = joined
        basePrefix = ""

    # build report
    time.sleep(30)
    run_process('{} -g {} -o {}'.format(JMETER, joined, reportDir))

    print("Merged output available at {}".format(out_dir))
    print("Merged report generated at {}".format(os.path.join(reportDir, 'index.html')))

    return reports


def parseModel():
    if benchmark_model in MODEL_MAP:
        plan, jmeter_args = MODEL_MAP[benchmark_model]
        for k, v in jmeter_args.items():
            if v in RESOURCE_MAP:
                jmeter_args[k] = get_resource(v)
            if k == 'data':
                jmeter_args[k] = os.path.join(MMS_BASE, 'benchmarks', v)
        if pargs.input:
            jmeter_args['input_filepath'] = pargs.input[0]
    else:
        plan = JMX_IMAGE_INPUT_MODEL_PLAN
        jmeter_args = {
            'url': benchmark_model,
            'model_name': basename(benchmark_model),
            'input_filepath': pargs.input[0]
        }
    return plan, jmeter_args


def decorate_metrics(data_frame, row_to_read):
    temp_dict = data_frame.loc[row_to_read].to_dict()
    result = dict()
    row_name = row_to_read.replace(' ', '_')
    for key, value in temp_dict.items():
        if key in AGGREGATE_REPORT_CSV_LABELS_MAP:
            new_key = '{}_{}_{}_{}'.format(benchmark_name, benchmark_model, row_name, AGGREGATE_REPORT_CSV_LABELS_MAP[key])
            result[new_key] = value
    return result


class Benchmarks:
    """
    Contains benchmarks to run
    """

    @staticmethod
    def throughput():
        """
        Performs a simple single benchmark that measures the model throughput on inference tasks
        """
        plan, jmeter_args = parseModel()
        return run_single_benchmark(plan, jmeter_args)

    @staticmethod
    def latency():
        """
        Performs a simple single benchmark that measures the model latency on inference tasks
        """
        plan, jmeter_args = parseModel()
        return run_single_benchmark(plan, jmeter_args, threads=1)

    @staticmethod
    def ping():
        """
        Performs a simple ping benchmark that measures the throughput for a ping request to the frontend
        """
        return run_single_benchmark(JMX_PING_PLAN, dict(), threads=5000)

    @staticmethod
    def load():
        """
        Benchmarks concurrent model load requests
        """
        plan, jmeter_args = parseModel()
        plan = JMX_CONCURRENT_LOAD_PLAN
        jmeter_args['count'] = 8
        return run_single_benchmark(plan, jmeter_args)

    @staticmethod
    def repeated_scale_calls():
        """
        Benchmarks repeated concurrent scale-up and scale-down worker calls
        """
        plan, jmeter_args = parseModel()
        plan = JMX_CONCURRENT_SCALE_CALLS
        jmeter_args['scale_up_workers'] = 16
        jmeter_args['scale_down_workers'] = 2
        return run_single_benchmark(plan, jmeter_args)

    @staticmethod
    def multiple_models():
        """
        Tests with 3 models
        """
        plan = JMX_MULTIPLE_MODELS_LOAD_PLAN
        jmeter_args = {
            'url1': MODEL_MAP[MODEL_NOOP][1]['url'],
            'url2': MODEL_MAP[MODEL_LSTM_PTB][1]['url'],
            'url3': MODEL_MAP[MODEL_RESNET_18][1]['url'],
            'model1_name': MODEL_MAP[MODEL_NOOP][1]['model_name'],
            'model2_name': MODEL_MAP[MODEL_LSTM_PTB][1]['model_name'],
            'model3_name': MODEL_MAP[MODEL_RESNET_18][1]['model_name'],
            'data3': get_resource('kitten.jpg')
        }
        return run_single_benchmark(plan, jmeter_args)

    @staticmethod
    def concurrent_inference():
        """
        Benchmarks number of concurrent inference requests
        """
        plan, jmeter_args = parseModel()
        return run_multi_benchmark('threads', range(1, 3*5+1, 3), plan, jmeter_args)


def run_benchmark():
    if hasattr(Benchmarks, benchmark_name):
        print("Running benchmark {} with model {}".format(benchmark_name, benchmark_model))
        res = getattr(Benchmarks, benchmark_name)()
        pprint.pprint(res)
        print('\n')
    else:
        raise Exception("No benchmark named {}".format(benchmark_name))


def modify_config_props_for_mms(pargs):
    shutil.copyfile(CONFIG_PROP_TEMPLATE, CONFIG_PROP)
    with open(CONFIG_PROP, 'a') as f:
        f.write('\nnumber_of_netty_threads=32')
        f.write('\njob_queue_size=1000')
        if pargs.gpus:
            f.write('\nnumber_of_gpu={}'.format(pargs.gpus[0]))


if __name__ == '__main__':
    benchmark_name_options = [f for f in dir(Benchmarks) if callable(getattr(Benchmarks, f)) and f[0] != '_']
    parser = argparse.ArgumentParser(prog='multi-model-server-benchmarks', description='Benchmark Multi Model Server')

    target = parser.add_mutually_exclusive_group(required=True)
    target.add_argument('name', nargs='?', type=str, choices=benchmark_name_options, help='The name of the benchmark to run')
    target.add_argument('-a', '--all', action='store_true', help='Run all benchmarks')
    target.add_argument('-s', '--suite', action='store_true', help='Run throughput and latency on a supplied model')

    model = parser.add_mutually_exclusive_group()
    model.add_argument('-m', '--model', nargs=1, type=str, dest='model', default=[MODEL_RESNET_18], choices=MODEL_MAP.keys(), help='A preloaded model to run.  It defaults to {}'.format(MODEL_RESNET_18))
    model.add_argument('-c', '--custom-model', nargs=1, type=str, dest='model', help='The path to a custom model to run.  The input argument must also be passed. Currently broken')

    parser.add_argument('-d', '--docker', nargs=1, type=str, default=None, help='Docker hub path to use')
    parser.add_argument('-i', '--input', nargs=1, type=str, default=None, help='The input to feed to the test')
    parser.add_argument('-g', '--gpus', nargs=1, type=int, default=None, help='Number of gpus.  Leave empty to run CPU only')

    parser.add_argument('-l', '--loops', nargs=1, type=int, default=[10], help='Number of loops to run')
    parser.add_argument('-t', '--threads', nargs=1, type=int, default=None, help='Number of jmeter threads to run')
    parser.add_argument('-w', '--workers', nargs=1, type=int, default=None, help='Number of MMS backend workers to use')

    parser.add_argument('--mms', nargs=1, type=str, help='Target an already running instance of MMS instead of spinning up a docker container of MMS.  Specify the target with the format address:port (for http) or protocol://address:port')
    parser.add_argument('--management-port', dest='management', nargs=1, type=str, help='When targeting a running MMS instance, specify the management port')
    parser.add_argument('-v', '--verbose', action='store_true', help='Display all output')
    parser.add_argument('--options', nargs='*', default=[], help='Additional jmeter arguments.  It should follow the format of --options argname1 argval1 argname2 argval2 ...')
    pargs = parser.parse_args()

    if os.path.exists(OUT_DIR):
        if pargs.all:
            shutil.rmtree(OUT_DIR)
            os.makedirs(OUT_DIR)
    else:
        os.makedirs(OUT_DIR)

    modify_config_props_for_mms(pargs)

    if pargs.suite:
        benchmark_model = pargs.model[0].lower()
        for benchmark_name in BENCHMARK_NAMES:
            run_benchmark()
            if not os.path.isdir(os.path.join(OUT_DIR, benchmark_name, basename(benchmark_model), 'report')):
                run_benchmark()

    elif pargs.all:
        for benchmark_name, benchmark_model in ALL_BENCHMARKS:
            run_benchmark()
    else:
        benchmark_name = pargs.name.lower()
        benchmark_model = pargs.model[0].lower()
        run_benchmark()


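The `--mms` flag accepts a target in the form `address:port` or `protocol://address:port`, which `run_single_benchmark` splits inline. The same parsing can be sketched as a standalone, hypothetical helper (not part of benchmark.py), with the same defaults the script uses: `http` when no protocol is given and port 80 when no port is given:

```python
def parse_target(url, default_port=80):
    """Split an MMS target of the form [protocol://]host[:port]."""
    protocol = 'http'
    if '://' in url:
        protocol, url = url.split('://')
    if ':' in url:
        hostname, port = url.split(':')
        return protocol, hostname, int(port)
    return protocol, url, default_port
```

Note that, as in the original inline code, IPv6 literals and URLs with paths are not handled; the benchmark only expects plain host:port targets.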
================================================
FILE: benchmarks/install_dependencies.sh
================================================
#!/bin/bash

# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

# This file contains the installation setup for running benchmarks on an EC2 instance.
# To run on a machine with a GPU: ./install_dependencies.sh True
# To run on a CPU-only machine:  ./install_dependencies.sh False
set -ex

sudo apt-get update
sudo apt-get -y upgrade
echo "Setting up your Ubuntu machine to load test MMS"
sudo apt-get install -y \
        python \
        python-pip \
        python3-pip \
        python3-tk \
        python-psutil \
        default-jre \
        default-jdk \
        linuxbrew-wrapper \
        build-essential

if [[ $1 = True ]]
then
        echo "Installing pip packages for GPU"
        sudo apt install -y nvidia-cuda-toolkit
        pip install future psutil mxnet-cu92 pillow --user
else
        echo "Installing pip packages for CPU"
        pip install future psutil mxnet pillow --user

fi

pip3 install pandas

echo "Installing JMeter through Brew"
# brew may exit with a non-zero status here, but the installation still succeeds
{
    yes '' | brew update
} || {
    true
}
{
    brew install jmeter --with-plugins
} || {
    true
}

wget https://jmeter-plugins.org/get/ -O /home/ubuntu/.linuxbrew/Cellar/jmeter/5.0/libexec/lib/ext/jmeter-plugins-manager-1.3.jar
wget "http://search.maven.org/remotecontent?filepath=kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar" -O /home/ubuntu/.linuxbrew/Cellar/jmeter/5.0/libexec/lib/cmdrunner-2.2.jar
java -cp /home/ubuntu/.linuxbrew/Cellar/jmeter/5.0/libexec/lib/ext/jmeter-plugins-manager-1.3.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
/home/ubuntu/.linuxbrew/Cellar/jmeter/5.0/libexec/bin/PluginsManagerCMD.sh install jpgc-synthesis=2.1,jpgc-filterresults=2.1,jpgc-mergeresults=2.1,jpgc-cmd=2.1,jpgc-perfmon=2.1

echo "Install docker"
sudo apt-get remove docker docker-engine docker.io
sudo apt-get install -y \
     apt-transport-https \
     ca-certificates \
     curl \
     software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
     "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install -y docker-ce
{
    sudo groupadd docker || true
} || {
    true
}
{
    sudo gpasswd -a $USER docker
} || {
    true
}


if [[ $1 = True ]]
then
    echo "Installing nvidia-docker"
    # If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
    {
        docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
    } || {
        true
    }
    {
        sudo apt-get purge -y nvidia-docker
    } || {
        true
    }

    # Add the package repositories
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
        sudo apt-key add -
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
        sudo tee /etc/apt/sources.list.d/nvidia-docker.list
    sudo apt-get update

    # Install nvidia-docker2 and reload the Docker daemon configuration
    sudo apt-get install -y nvidia-docker2
    sudo pkill -SIGHUP dockerd

    # Test nvidia-smi with the latest official CUDA image
    docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
fi


================================================
FILE: benchmarks/jmx/concurrentLoadPlan.jmx
================================================
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="4.0" jmeter="4.0 r1823414">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="MMS Benchmarking Concurrent Load Models Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="Variables" enabled="true">
        <collectionProp name="Arguments.arguments">
          <elementProp name="model_url" elementType="Argument">
            <stringProp name="Argument.name">model_url</stringProp>
            <stringProp name="Argument.value">${__P(url, https://s3.amazonaws.com/model-server/models/resnet-18/resnet-18.model)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The url from where to fetch the models from</stringProp>
          </elementProp>
          <elementProp name="model" elementType="Argument">
            <stringProp name="Argument.name">model</stringProp>
            <stringProp name="Argument.value">${__P(model_name,resnet-18)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="count" elementType="Argument">
            <stringProp name="Argument.name">count</stringProp>
            <stringProp name="Argument.value">${__P(count,10)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
        </collectionProp>
      </Arguments>
      <hashTree/>
      <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
          <collectionProp name="Arguments.arguments"/>
        </elementProp>
        <stringProp name="HTTPSampler.domain">${__P(hostname,127.0.0.1)}</stringProp>
        <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
        <stringProp name="HTTPSampler.protocol">${__P(protocol,https)}</stringProp>
        <stringProp name="HTTPSampler.contentEncoding"></stringProp>
        <stringProp name="HTTPSampler.path"></stringProp>
        <stringProp name="HTTPSampler.concurrentPool">6</stringProp>
        <stringProp name="HTTPSampler.connect_timeout"></stringProp>
        <stringProp name="HTTPSampler.response_timeout"></stringProp>
      </ConfigTestElement>
      <hashTree/>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Concurrent Load Model Requests" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">${__P(loops,1)}</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">${count}</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Load Model Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments">
              <elementProp name="ctr" elementType="HTTPArgument">
                <boolProp name="HTTPArgument.always_encode">false</boolProp>
                <stringProp name="Argument.value">${ctr}</stringProp>
                <stringProp name="Argument.metadata">=</stringProp>
                <boolProp name="HTTPArgument.use_equals">true</boolProp>
                <stringProp name="Argument.name">ctr</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models?url=${model}-${__counter(FALSE, )}&amp;model_name=${model}-${__counter(FALSE,)}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <SyncTimer guiclass="TestBeanGUI" testclass="SyncTimer" testname="Synchronizing Timer" enabled="true">
          <stringProp name="groupSize">${count}</stringProp>
          <longProp name="timeoutInMs">0</longProp>
        </SyncTimer>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>


================================================
FILE: benchmarks/jmx/concurrentScaleCalls.jmx
================================================
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="4.0" jmeter="4.0 r1823414">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="MMS Benchmarking Concurrent ScaleUp/ScaleDown Models Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="Variables" enabled="true">
        <collectionProp name="Arguments.arguments">
          <elementProp name="model_url" elementType="Argument">
            <stringProp name="Argument.name">model_url</stringProp>
            <stringProp name="Argument.value">${__P(url, https://s3.amazonaws.com/model-server/models/resnet-18/resnet-18.model)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The url from where to fetch the models from</stringProp>
          </elementProp>
          <elementProp name="model" elementType="Argument">
            <stringProp name="Argument.name">model</stringProp>
            <stringProp name="Argument.value">${__P(model_name,resnet-18)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">Name of the model to run the tests on</stringProp>
          </elementProp>
          <elementProp name="count" elementType="Argument">
            <stringProp name="Argument.name">count</stringProp>
            <stringProp name="Argument.value">${__P(count,1)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="scale_up_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_up_workers</stringProp>
            <stringProp name="Argument.value">${__P(scale_up_workers,1)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The number of workers to scale model to</stringProp>
          </elementProp>
          <elementProp name="scale_down_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_down_workers</stringProp>
            <stringProp name="Argument.value">${__P(scale_down_workers,1)}</stringProp>
            <stringProp name="Argument.desc">Scale down the workers</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
        </collectionProp>
      </Arguments>
      <hashTree/>
      <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
          <collectionProp name="Arguments.arguments"/>
        </elementProp>
        <stringProp name="HTTPSampler.domain">${__P(hostname,127.0.0.1)}</stringProp>
        <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
        <stringProp name="HTTPSampler.protocol">${__P(protocol,https)}</stringProp>
        <stringProp name="HTTPSampler.contentEncoding"></stringProp>
        <stringProp name="HTTPSampler.path"></stringProp>
        <stringProp name="HTTPSampler.concurrentPool">6</stringProp>
        <stringProp name="HTTPSampler.connect_timeout"></stringProp>
        <stringProp name="HTTPSampler.response_timeout"></stringProp>
      </ConfigTestElement>
      <hashTree/>
      <SetupThreadGroup guiclass="SetupThreadGroupGui" testclass="SetupThreadGroup" testname="setup ${model}" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </SetupThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Setup ${model} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models?url=${model_url}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Concurrent Scale Up Model/Scale Down Model Requests" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">${__P(loops,10)}</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">${__P(threads,2)}</stringProp>
        <stringProp name="ThreadGroup.ramp_time">${__P(rampup,5)}</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Up Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model}?min_worker=${scale_up_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Down Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model}?min_worker=${scale_down_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <SyncTimer guiclass="TestBeanGUI" testclass="SyncTimer" testname="Synchronizing Timer" enabled="true">
          <stringProp name="groupSize">${count}</stringProp>
          <longProp name="timeoutInMs">0</longProp>
        </SyncTimer>
        <hashTree/>
      </hashTree>
      <PostThreadGroup guiclass="PostThreadGroupGui" testclass="PostThreadGroup" testname="tearDown ${model}" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </PostThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Down ${model}" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model}?min_worker=0</stringProp>
          <stringProp name="HTTPSampler.method">DELETE</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
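The plan above reads every setting through `${__P(name,default)}`, JMeter's property lookup, so each value can be overridden on the command line with `-J<name>=<value>`. A minimal dry-run sketch of a non-GUI invocation (assuming JMeter 4.0+ is on PATH as `jmeter`; the concrete worker/thread values are illustrative, and the `echo` prints the command instead of running it):

```shell
# Dry-run sketch: assemble the JMeter invocation for concurrentScaleCalls.jmx.
# Property names (hostname, management_port, scale_up_workers, ...) match the
# plan's __P() defaults above; the values chosen here are only examples.
CMD="jmeter -n -t benchmarks/jmx/concurrentScaleCalls.jmx \
  -Jhostname=127.0.0.1 -Jmanagement_port=8444 -Jprotocol=https \
  -Jscale_up_workers=4 -Jscale_down_workers=1 \
  -Jthreads=2 -Jrampup=5 -Jloops=10 \
  -l scale_calls.jtl"
echo "$CMD"   # print only; drop the echo to actually run the load test
```

`-n` selects non-GUI mode, `-t` names the test plan, and `-l` records sampler results to a `.jtl` file that downstream plans (such as graphsGenerator.jmx) can consume.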


================================================
FILE: benchmarks/jmx/graphsGenerator.jmx
================================================
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="4.0" jmeter="4.0 r1823414">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Graphs Generator" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Graph Gen thread group" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <kg.apc.jmeter.listener.GraphsGeneratorListener guiclass="TestBeanGUI" testclass="kg.apc.jmeter.listener.GraphsGeneratorListener" testname="Graph Generator" enabled="true">
          <boolProp name="aggregateRows">false</boolProp>
          <boolProp name="autoScaleRows">false</boolProp>
          <stringProp name="endOffset"></stringProp>
          <stringProp name="excludeLabels"></stringProp>
          <boolProp name="excludeSamplesWithRegex">false</boolProp>
          <intProp name="exportMode">0</intProp>
          <stringProp name="filePrefix">${__P(graph_prefix,g_)}</stringProp>
          <stringProp name="forceY"></stringProp>
          <stringProp name="granulation">1000</stringProp>
          <intProp name="graphHeight">600</intProp>
          <intProp name="graphWidth">800</intProp>
          <stringProp name="includeLabels">Inference Request</stringProp>
          <boolProp name="includeSamplesWithRegex">false</boolProp>
          <stringProp name="limitRows">150</stringProp>
          <stringProp name="lineWeight"></stringProp>
          <stringProp name="lowCountLimit"></stringProp>
          <stringProp name="outputBaseFolder">${__P(raw_output)}</stringProp>
          <boolProp name="paintGradient">true</boolProp>
          <boolProp name="paintZeroing">true</boolProp>
          <boolProp name="preventOutliers">false</boolProp>
          <boolProp name="relativeTimes">true</boolProp>
          <stringProp name="resultsFileName">${__P(jtl_input)}</stringProp>
          <stringProp name="startOffset"></stringProp>
          <stringProp name="successFilter"></stringProp>
        </kg.apc.jmeter.listener.GraphsGeneratorListener>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
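This plan contains no samplers of its own; its single thread group exists only to fire the `kg.apc.jmeter.listener.GraphsGeneratorListener` (from the JMeter Plugins package), which reads a previously recorded `.jtl` file and writes graph images. A hedged sketch of a run, assuming `jmeter` is on PATH with the plugin installed; the file paths are made up for illustration:

```shell
# Dry-run sketch: point the graph-generator plan at an existing results file.
# jtl_input, raw_output and graph_prefix are the __P() properties the
# listener reads; the /tmp paths below are illustrative, not required.
CMD="jmeter -n -t benchmarks/jmx/graphsGenerator.jmx \
  -Jjtl_input=/tmp/mms_benchmark/inference.jtl \
  -Jraw_output=/tmp/mms_benchmark/graphs \
  -Jgraph_prefix=run1_"
echo "$CMD"   # print only; drop the echo to generate the graphs
```

Note the listener filters on the sampler label `Inference Request` (`includeLabels`), so the input `.jtl` should come from one of the inference plans in this directory.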


================================================
FILE: benchmarks/jmx/imageInputModelPlan.jmx
================================================
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="4.0" jmeter="4.0 r1823414">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="MMS Benchmarking Image Input Model Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="Variables" enabled="true">
        <collectionProp name="Arguments.arguments">
          <elementProp name="cnn_url" elementType="Argument">
            <stringProp name="Argument.name">cnn_url</stringProp>
            <stringProp name="Argument.value">${__P(url, https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The URL from which to fetch the model</stringProp>
          </elementProp>
          <elementProp name="scale_up_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_up_workers</stringProp>
            <stringProp name="Argument.value">${__P(min_workers,1)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The number of workers to scale the model up to</stringProp>
          </elementProp>
          <elementProp name="scale_down_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_down_workers</stringProp>
            <stringProp name="Argument.value">0</stringProp>
            <stringProp name="Argument.desc">The number of workers to scale the model down to</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="model" elementType="Argument">
            <stringProp name="Argument.name">model</stringProp>
            <stringProp name="Argument.value">${__P(model_name,squeezenet_v1.1)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
        </collectionProp>
      </Arguments>
      <hashTree/>
      <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
          <collectionProp name="Arguments.arguments"/>
        </elementProp>
        <stringProp name="HTTPSampler.domain">${__P(hostname,127.0.0.1)}</stringProp>
        <stringProp name="HTTPSampler.port">${__P(port,8443)}</stringProp>
        <stringProp name="HTTPSampler.protocol">${__P(protocol,https)}</stringProp>
        <stringProp name="HTTPSampler.contentEncoding"></stringProp>
        <stringProp name="HTTPSampler.path"></stringProp>
        <stringProp name="HTTPSampler.concurrentPool">6</stringProp>
        <stringProp name="HTTPSampler.connect_timeout"></stringProp>
        <stringProp name="HTTPSampler.response_timeout"></stringProp>
      </ConfigTestElement>
      <hashTree/>
      <SetupThreadGroup guiclass="SetupThreadGroupGui" testclass="SetupThreadGroup" testname="setup ${model}" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </SetupThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Setup ${model} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models?url=${cnn_url}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Up ${model} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model}?min_worker=${scale_up_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="${model} Inference" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">${__P(loops,200)}</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">${__P(threads,20)}</stringProp>
        <stringProp name="ThreadGroup.ramp_time">${__P(rampup,5)}</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Inference Request" enabled="true">
          <elementProp name="HTTPsampler.Files" elementType="HTTPFileArgs">
            <collectionProp name="HTTPFileArgs.files">
              <elementProp name="${__P(input_filepath)}" elementType="HTTPFileArg">
                <stringProp name="File.path">${__P(input_filepath)}</stringProp>
                <stringProp name="File.paramname">data</stringProp>
                <stringProp name="File.mimetype">image/jpeg</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/predictions/${model}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">true</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
      <PostThreadGroup guiclass="PostThreadGroupGui" testclass="PostThreadGroup" testname="tearDown ${model}" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </PostThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Down ${model}" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model}?min_worker=${scale_down_workers}</stringProp>
          <stringProp name="HTTPSampler.method">DELETE</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
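Unlike the scaling plan, this one benchmarks the inference path: its `Inference Request` sampler does a multipart POST to `/predictions/${model}` with a local image file supplied through the `input_filepath` property (uploaded as form field `data` with MIME type `image/jpeg`). A dry-run sketch under the same assumptions (`jmeter` on PATH; `/tmp/kitten.jpg` is an illustrative image path, not a file shipped with the plan):

```shell
# Dry-run sketch: benchmark image inference against a local MMS instance.
# input_filepath must name a readable JPEG on the machine running JMeter;
# the other properties mirror the plan's __P() defaults.
CMD="jmeter -n -t benchmarks/jmx/imageInputModelPlan.jmx \
  -Jhostname=127.0.0.1 -Jport=8443 -Jmanagement_port=8444 \
  -Jmodel_name=squeezenet_v1.1 \
  -Jinput_filepath=/tmp/kitten.jpg \
  -Jthreads=20 -Jloops=200 \
  -l image_inference.jtl"
echo "$CMD"   # print only; drop the echo to run the benchmark
```

Inference traffic goes to `port` (8443 by default) while the setup/teardown samplers override the port per-request to `management_port` (8444), matching MMS's split inference/management endpoints.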


================================================
FILE: benchmarks/jmx/multipleModelsLoadPlan.jmx
================================================
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="4.0" jmeter="4.0 r1823414">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Multiple Models ConcurrentLoad and Serve Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="Variables" enabled="true">
        <collectionProp name="Arguments.arguments">
          <elementProp name="url1" elementType="Argument">
            <stringProp name="Argument.name">url1</stringProp>
            <stringProp name="Argument.value">${__P(url1,noop.model)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The URL from which to fetch the noop model</stringProp>
          </elementProp>
          <elementProp name="url2" elementType="Argument">
            <stringProp name="Argument.name">url2</stringProp>
            <stringProp name="Argument.value">${__P(url2,lstm_ptb)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="url3" elementType="Argument">
            <stringProp name="Argument.name">url3</stringProp>
            <stringProp name="Argument.value">${__P(url3,https://s3.amazonaws.com/model-server/models/resnet-18/resnet-18.model)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="model1" elementType="Argument">
            <stringProp name="Argument.name">model1</stringProp>
            <stringProp name="Argument.value">${__P(model1_name,noop)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="model2" elementType="Argument">
            <stringProp name="Argument.name">model2</stringProp>
            <stringProp name="Argument.value">${__P(model2_name,lstm_ptb)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="model3" elementType="Argument">
            <stringProp name="Argument.name">model3</stringProp>
            <stringProp name="Argument.value">${__P(model3_name,resnet-18)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="scale_up_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_up_workers</stringProp>
            <stringProp name="Argument.value">${__P(min_workers,1)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="scale_down_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_down_workers</stringProp>
            <stringProp name="Argument.value">${__P(scale_down_workers, 0)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
        </collectionProp>
      </Arguments>
      <hashTree/>
      <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
          <collectionProp name="Arguments.arguments"/>
        </elementProp>
        <stringProp name="HTTPSampler.domain">${__P(hostname,127.0.0.1)}</stringProp>
        <stringProp name="HTTPSampler.port">${__P(port,8443)}</stringProp>
        <stringProp name="HTTPSampler.protocol">${__P(protocol,https)}</stringProp>
        <stringProp name="HTTPSampler.contentEncoding"></stringProp>
        <stringProp name="HTTPSampler.path"></stringProp>
        <stringProp name="HTTPSampler.concurrentPool">6</stringProp>
        <stringProp name="HTTPSampler.connect_timeout"></stringProp>
        <stringProp name="HTTPSampler.response_timeout"></stringProp>
      </ConfigTestElement>
      <hashTree/>
      <SetupThreadGroup guiclass="SetupThreadGroupGui" testclass="SetupThreadGroup" testname="Setup Models" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </SetupThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Setup ${model1} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models?url=${url1}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Up ${model1} Model" enabled="true">
          <boolProp name="HTTPSampler.postBodyRaw">true</boolProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments">
              <elementProp name="" elementType="HTTPArgument">
                <boolProp name="HTTPArgument.always_encode">false</boolProp>
                <stringProp name="Argument.value"></stringProp>
                <stringProp name="Argument.metadata">=</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model1}?min_worker=${scale_up_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Setup ${model2} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models?url=${url2}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Up ${model2} Model" enabled="true">
          <boolProp name="HTTPSampler.postBodyRaw">true</boolProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments">
              <elementProp name="" elementType="HTTPArgument">
                <boolProp name="HTTPArgument.always_encode">false</boolProp>
                <stringProp name="Argument.value"></stringProp>
                <stringProp name="Argument.metadata">=</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model2}?min_worker=${scale_up_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Setup ${model3} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models?url=${url3}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Up ${model3} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model3}?min_worker=${scale_up_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Inference Calls" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">${__P(loops,2)}</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">${__P(threads,2)}</stringProp>
        <stringProp name="ThreadGroup.ramp_time">${__P(rampup,5)}</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <RandomOrderController guiclass="RandomOrderControllerGui" testclass="RandomOrderController" testname="Random Order Controller For Inference Requests" enabled="true"/>
        <hashTree>
          <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="${model1} Inference Request" enabled="true">
            <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
              <collectionProp name="Arguments.arguments">
                <elementProp name="data" elementType="HTTPArgument">
                  <boolProp name="HTTPArgument.always_encode">false</boolProp>
                  <stringProp name="Argument.value">${__P(data1,&apos;Some garbage data being passed here&apos;)}</stringProp>
                  <stringProp name="Argument.metadata">=</stringProp>
                  <boolProp name="HTTPArgument.use_equals">true</boolProp>
                  <stringProp name="Argument.name">data</stringProp>
                </elementProp>
              </collectionProp>
            </elementProp>
            <stringProp name="HTTPSampler.domain"></stringProp>
            <stringProp name="HTTPSampler.port"></stringProp>
            <stringProp name="HTTPSampler.protocol"></stringProp>
            <stringProp name="HTTPSampler.contentEncoding"></stringProp>
            <stringProp name="HTTPSampler.path">/predictions/${model1}</stringProp>
            <stringProp name="HTTPSampler.method">POST</stringProp>
            <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
            <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
            <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
            <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
            <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
            <stringProp name="HTTPSampler.connect_timeout"></stringProp>
            <stringProp name="HTTPSampler.response_timeout"></stringProp>
          </HTTPSamplerProxy>
          <hashTree/>
          <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="${model2} Inference Request" enabled="true">
            <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
              <collectionProp name="Arguments.arguments">
                <elementProp name="data" elementType="HTTPArgument">
                  <boolProp name="HTTPArgument.always_encode">false</boolProp>
                  <stringProp name="Argument.value">${__P(data2,&apos;Some garbage data being passed here&apos;)}</stringProp>
                  <stringProp name="Argument.metadata">=</stringProp>
                  <boolProp name="HTTPArgument.use_equals">true</boolProp>
                  <stringProp name="Argument.name">data</stringProp>
                </elementProp>
              </collectionProp>
            </elementProp>
            <stringProp name="HTTPSampler.domain"></stringProp>
            <stringProp name="HTTPSampler.port"></stringProp>
            <stringProp name="HTTPSampler.protocol"></stringProp>
            <stringProp name="HTTPSampler.contentEncoding"></stringProp>
            <stringProp name="HTTPSampler.path">/predictions/${model2}</stringProp>
            <stringProp name="HTTPSampler.method">POST</stringProp>
            <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
            <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
            <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
            <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
            <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
            <stringProp name="HTTPSampler.connect_timeout"></stringProp>
            <stringProp name="HTTPSampler.response_timeout"></stringProp>
          </HTTPSamplerProxy>
          <hashTree/>
          <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="${model3} Inference Request" enabled="true">
            <elementProp name="HTTPsampler.Files" elementType="HTTPFileArgs">
              <collectionProp name="HTTPFileArgs.files">
                <elementProp name="${__P(data3)}" elementType="HTTPFileArg">
                  <stringProp name="File.path">${__P(data3)}</stringProp>
                  <stringProp name="File.paramname">data</stringProp>
                  <stringProp name="File.mimetype">image/jpeg</stringProp>
                </elementProp>
              </collectionProp>
            </elementProp>
            <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
              <collectionProp name="Arguments.arguments"/>
            </elementProp>
            <stringProp name="HTTPSampler.domain"></stringProp>
            <stringProp name="HTTPSampler.port"></stringProp>
            <stringProp name="HTTPSampler.protocol"></stringProp>
            <stringProp name="HTTPSampler.contentEncoding"></stringProp>
            <stringProp name="HTTPSampler.path">/predictions/${model3}</stringProp>
            <stringProp name="HTTPSampler.method">POST</stringProp>
            <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
            <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
            <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
            <boolProp name="HTTPSampler.DO_MULTIPART_POST">true</boolProp>
            <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
            <stringProp name="HTTPSampler.connect_timeout"></stringProp>
            <stringProp name="HTTPSampler.response_timeout"></stringProp>
          </HTTPSamplerProxy>
          <hashTree/>
        </hashTree>
      </hashTree>
      <PostThreadGroup guiclass="PostThreadGroupGui" testclass="PostThreadGroup" testname="Tear Down Models" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </PostThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Down ${model1} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model1}?min_worker=${scale_down_workers}</stringProp>
          <stringProp name="HTTPSampler.method">DELETE</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Down ${model2} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model2}?min_worker=${scale_down_workers}</stringProp>
          <stringProp name="HTTPSampler.method">DELETE</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Down ${model3} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model3}?min_worker=${scale_down_workers}</stringProp>
          <stringProp name="HTTPSampler.method">DELETE</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>


================================================
FILE: benchmarks/jmx/pingPlan.jmx
================================================
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="4.0" jmeter="4.0 r1823414">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="MMS Benchmarking Ping Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
          <collectionProp name="Arguments.arguments"/>
        </elementProp>
        <stringProp name="HTTPSampler.domain">${__P(hostname,127.0.0.1)}</stringProp>
        <stringProp name="HTTPSampler.port">${__P(port,8443)}</stringProp>
        <stringProp name="HTTPSampler.protocol">${__P(protocol,https)}</stringProp>
        <stringProp name="HTTPSampler.contentEncoding"></stringProp>
        <stringProp name="HTTPSampler.path"></stringProp>
        <stringProp name="HTTPSampler.concurrentPool">6</stringProp>
        <stringProp name="HTTPSampler.connect_timeout"></stringProp>
        <stringProp name="HTTPSampler.response_timeout"></stringProp>
      </ConfigTestElement>
      <hashTree/>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Ping" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">${__P(loops,200)}</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">${__P(threads,200)}</stringProp>
        <stringProp name="ThreadGroup.ramp_time">${__P(rampup,5)}</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Ping Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/ping</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
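A note on the plans above: every tunable knob (host, port, protocol, thread count, loop count, ramp-up) is expressed through JMeter's `${__P(name,default)}` property function, so each value can be overridden at run time with `jmeter -Jname=value` without editing the plan. The sketch below (a hypothetical helper, not part of this repository) extracts those knobs and their defaults from a plan; the embedded snippet mirrors the defaults declared in pingPlan.jmx above.

```python
import re

# Excerpt mirroring the __P() defaults declared in pingPlan.jmx above.
JMX_SNIPPET = """
<stringProp name="HTTPSampler.domain">${__P(hostname,127.0.0.1)}</stringProp>
<stringProp name="HTTPSampler.port">${__P(port,8443)}</stringProp>
<stringProp name="HTTPSampler.protocol">${__P(protocol,https)}</stringProp>
<stringProp name="LoopController.loops">${__P(loops,200)}</stringProp>
<stringProp name="ThreadGroup.num_threads">${__P(threads,200)}</stringProp>
<stringProp name="ThreadGroup.ramp_time">${__P(rampup,5)}</stringProp>
"""

def jmx_properties(text):
    """Return {property_name: default} for every ${__P(name,default)} in a plan."""
    return dict(re.findall(r"\$\{__P\(([^,)]+),([^)]*)\)\}", text))

# Each key here can be overridden on the CLI, e.g.:
#   jmeter -n -t pingPlan.jmx -Jhostname=my-host -Jthreads=50
print(jmx_properties(JMX_SNIPPET))
```

Running this prints the overridable properties with their plan defaults, e.g. `hostname` → `127.0.0.1` and `threads` → `200`.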


================================================
FILE: benchmarks/jmx/textInputModelPlan.jmx
================================================
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="4.0" jmeter="4.0 r1823414">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="MMS Benchmarking Text Input Model Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="Variables" enabled="true">
        <collectionProp name="Arguments.arguments">
          <elementProp name="noop_url" elementType="Argument">
            <stringProp name="Argument.name">noop_url</stringProp>
            <stringProp name="Argument.value">${__P(url,noop.model)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The URL from which to fetch the noop model</stringProp>
          </elementProp>
          <elementProp name="scale_up_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_up_workers</stringProp>
            <stringProp name="Argument.value">${__P(min_workers,1)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
            <stringProp name="Argument.desc">The number of workers to scale the noop model up to</stringProp>
          </elementProp>
          <elementProp name="scale_down_workers" elementType="Argument">
            <stringProp name="Argument.name">scale_down_workers</stringProp>
            <stringProp name="Argument.value">0</stringProp>
            <stringProp name="Argument.desc">Offload the No-Op Model</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="model" elementType="Argument">
            <stringProp name="Argument.name">model</stringProp>
            <stringProp name="Argument.value">${__P(model_name,noop)}</stringProp>
            <stringProp name="Argument.desc">Model name</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
        </collectionProp>
      </Arguments>
      <hashTree/>
      <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
          <collectionProp name="Arguments.arguments"/>
        </elementProp>
        <stringProp name="HTTPSampler.domain">${__P(hostname,127.0.0.1)}</stringProp>
        <stringProp name="HTTPSampler.port">${__P(port,8443)}</stringProp>
        <stringProp name="HTTPSampler.protocol">${__P(protocol,https)}</stringProp>
        <stringProp name="HTTPSampler.contentEncoding"></stringProp>
        <stringProp name="HTTPSampler.path"></stringProp>
        <stringProp name="HTTPSampler.concurrentPool">6</stringProp>
        <stringProp name="HTTPSampler.connect_timeout"></stringProp>
        <stringProp name="HTTPSampler.response_timeout"></stringProp>
      </ConfigTestElement>
      <hashTree/>
      <SetupThreadGroup guiclass="SetupThreadGroupGui" testclass="SetupThreadGroup" testname="setup ${model}" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </SetupThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Setup ${model} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models?url=${noop_url}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Up ${model} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model}?min_worker=${scale_up_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="${model} Inference" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">${__P(loops,200)}</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">${__P(threads,200)}</stringProp>
        <stringProp name="ThreadGroup.ramp_time">${__P(rampup,5)}</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Inference Request" enabled="true">
          <elementProp name="HTTPsampler.Files" elementType="HTTPFileArgs">
            <collectionProp name="HTTPFileArgs.files">
              <elementProp name="${__P(data,)}" elementType="HTTPFileArg">
                <stringProp name="File.path">${__P(data,)}</stringProp>
                <stringProp name="File.paramname">data</stringProp>
                <stringProp name="File.mimetype">application/json</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/predictions/${model}</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">true</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
      <PostThreadGroup guiclass="PostThreadGroupGui" testclass="PostThreadGroup" testname="tearDown ${model}" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </PostThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Scale Down ${model} Model" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain"></stringProp>
          <stringProp name="HTTPSampler.port">${__P(management_port,8444)}</stringProp>
          <stringProp name="HTTPSampler.protocol"></stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/models/${model}?min_worker=${scale_down_workers}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>


================================================
FILE: benchmarks/lstm_ip.json
================================================
[{"input_sentence": "on the exchange floor as soon as ual stopped trading we <unk> for a panic said one top floor trader"}]

================================================
FILE: benchmarks/mac_install_dependencies.sh
================================================
#!/bin/bash

# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

# This file contains the installation setup for running benchmarks on a macOS machine.
# To run on a machine with GPU : ./mac_install_dependencies.sh True
# To run on a machine with CPU : ./mac_install_dependencies.sh False
set -ex

echo "Installing JMeter through Brew"
# brew can return a non-zero exit status even when everything works fine,
# so explicitly ignore errors from these commands (an empty `{ }` group is a
# bash syntax error, hence `|| true`)
brew update || true
brew install jmeter --with-plugins || true

wget https://jmeter-plugins.org/get/ -O /usr/local/Cellar/jmeter/4.0/libexec/lib/ext/jmeter-plugins-manager-1.3.jar
wget http://search.maven.org/remotecontent?filepath=kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -O /usr/local/Cellar/jmeter/4.0/libexec/lib/cmdrunner-2.2.jar
java -cp /usr/local/Cellar/jmeter/4.0/libexec/lib/ext/jmeter-plugins-manager-1.3.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
/usr/local/Cellar/jmeter/4.0/libexec/bin/PluginsManagerCMD.sh install jpgc-synthesis=2.1,jpgc-filterresults=2.1,jpgc-mergeresults=2.1,jpgc-cmd=2.1,jpgc-perfmon=2.1


================================================
FILE: benchmarks/noop_ip.txt
================================================
"[{\"input_sentence\": \"Hello World\"}]"

================================================
FILE: benchmarks/upload_results_to_s3.sh
================================================
#!/usr/bin/env bash

# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

#Author: Piyush Ghai

set -ex

echo "uploading result files to s3"

hw_type=cpu

if [ "$1" = "True" ]
then
    hw_type=gpu
fi

pwd
cd /tmp/MMSBenchmark/out
pwd

today=$(date +"%m-%d-%y")
echo "Saving on S3 bucket on s3://benchmarkai-metrics-prod/daily/mms/$hw_type/$today"

# Iterate over result directories with a glob instead of parsing `ls` output
for dir in */
do
    dir=${dir%/}
    echo "$dir"
    aws s3 cp "$dir/" "s3://benchmarkai-metrics-prod/daily/mms/$hw_type/$today/$dir/" --recursive
done

echo "Files uploaded"


================================================
FILE: ci/Dockerfile.python2.7
================================================
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
#

FROM ubuntu:14.04.5

ENV LANG="C.UTF-8"

ENV DOCKER_BUCKET="download.docker.com" \
    DOCKER_VERSION="17.09.0-ce" \
    DOCKER_CHANNEL="stable" \
    DOCKER_SHA256="a9e90a73c3cdfbf238f148e1ec0eaff5eb181f92f35bdd938fd7dab18e1c4647" \
    DIND_COMMIT="3b5fac462d21ca164b3778647420016315289034" \
    DOCKER_COMPOSE_VERSION="1.16.1" \
    GITVERSION_VERSION="3.6.5"

# Install git
RUN set -ex \
    && apt-get update \
    && apt-get install software-properties-common -y --no-install-recommends\
    && apt-add-repository ppa:git-core/ppa \
    && apt-get update \
    && apt-get install git -y --no-install-recommends\
    && git version

RUN set -ex \
    && echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/99use-gzip-compression \
    && apt-get update \
    && apt-get install -y --no-install-recommends wget=1.15-* fakeroot=1.20-* ca-certificates \
        autoconf=2.69-* automake=1:1.14.* less=458-* groff=1.22.* \
        bzip2=1.0.* file=1:5.14-* g++=4:4.8.* gcc=4:4.8.* imagemagick=8:6.7.* \
        libbz2-dev=1.0.* libc6-dev=2.19-* libcurl4-openssl-dev=7.35.* curl=7.35.* \
        libdb-dev=1:5.3.* libevent-dev=2.0.* libffi-dev=3.1~* \
        libgeoip-dev=1.6.* libglib2.0-dev=2.40.* libjpeg-dev=8c-* \
        libkrb5-dev=1.12+* liblzma-dev=5.1.* libmagickcore-dev=8:6.7.* \
        libmagickwand-dev=8:6.7.* libmysqlclient-dev=5.5.* libncurses5-dev=5.9+* \
        libpng12-dev=1.2.* libpq-dev=9.3.* libreadline-dev=6.3-* libsqlite3-dev=3.8.* \
        libssl-dev=1.0.* libtool=2.4.* libwebp-dev=0.4.* libxml2-dev=2.9.* \
        libxslt1-dev=1.1.* libyaml-dev=0.1.* make=3.81-* patch=2.7.* xz-utils=5.1.* \
        zlib1g-dev=1:1.2.* tcl=8.6.* tk=8.6.* \
        e2fsprogs=1.42.* iptables=1.4.* xfsprogs=3.1.* xz-utils=5.1.* \
        mono-mcs=3.2.* libcurl4-openssl-dev=7.35.* liberror-perl=0.17-* unzip=6.0-*\
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Download and set up GitVersion
RUN set -ex \
    && wget "https://github.com/GitTools/GitVersion/releases/download/v${GITVERSION_VERSION}/GitVersion_${GITVERSION_VERSION}.zip" -O /tmp/GitVersion_${GITVERSION_VERSION}.zip \
    && mkdir -p /usr/local/GitVersion_${GITVERSION_VERSION} \
    && unzip /tmp/GitVersion_${GITVERSION_VERSION}.zip -d /usr/local/GitVersion_${GITVERSION_VERSION} \
    && rm /tmp/GitVersion_${GITVERSION_VERSION}.zip \
    && echo "mono /usr/local/GitVersion_${GITVERSION_VERSION}/GitVersion.exe /output json /showvariable \$1" >> /usr/local/bin/gitversion \
    && chmod +x /usr/local/bin/gitversion
# Install Docker
RUN set -ex \
    && curl -fSL "https://${DOCKER_BUCKET}/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz" -o docker.tgz \
    && echo "${DOCKER_SHA256} *docker.tgz" | sha256sum -c - \
    && tar --extract --file docker.tgz --strip-components 1  --directory /usr/local/bin/ \
    && rm docker.tgz \
    && docker -v \
# set up subuid/subgid so that "--userns-remap=default" works out-of-the-box
    && addgroup dockremap \
    && useradd -g dockremap dockremap \
    && echo 'dockremap:165536:65536' >> /etc/subuid \
    && echo 'dockremap:165536:65536' >> /etc/subgid \
    && wget "https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind" -O /usr/local/bin/dind \
    && curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-Linux-x86_64 > /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/dind /usr/local/bin/docker-compose \
# Ensure docker-compose works
    && docker-compose version

VOLUME /var/lib/docker

COPY dockerd-entrypoint.sh /usr/local/bin/

ENV PATH="/usr/local/bin:$PATH" \
    GPG_KEY="C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF" \
    PYTHON_VERSION="2.7.12" \
    PYTHON_PIP_VERSION="8.1.2"

RUN set -ex \
    && apt-get update \
    && apt-get install -y --no-install-recommends tcl-dev tk-dev \
    && rm -rf /var/lib/apt/lists/* \
	\
	&& wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
	&& wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
	&& export GNUPGHOME="$(mktemp -d)" \
	&& (gpg --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$GPG_KEY" \
        || gpg --keyserver pgp.mit.edu --recv-keys "$GPG_KEY" \
        || gpg --keyserver keyserver.ubuntu.com --recv-keys "$GPG_KEY") \
	&& gpg --batch --verify python.tar.xz.asc python.tar.xz \
	&& rm -r "$GNUPGHOME" python.tar.xz.asc \
	&& mkdir -p /usr/src/python \
	&& tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
	&& rm python.tar.xz \
	\
	&& cd /usr/src/python \
	&& ./configure \
		--enable-shared \
		--enable-unicode=ucs4 \
	&& make -j$(nproc) \
	&& make install \
	&& ldconfig \
	\
		&& wget -O /tmp/get-pip.py 'https://bootstrap.pypa.io/get-pip.py' \
		&& python2 /tmp/get-pip.py "pip==$PYTHON_PIP_VERSION" \
		&& rm /tmp/get-pip.py \
# we use "--force-reinstall" for the case where the version of pip we're trying to install is the same as the version bundled with Python
# ("Requirement already up-to-date: pip==8.1.2 in /usr/local/lib/python3.6/site-packages")
# https://github.com/docker-library/python/pull/143#issuecomment-241032683
	&& pip install --no-cache-dir --upgrade --force-reinstall "pip==$PYTHON_PIP_VERSION" \
        && pip install awscli==1.* --no-cache-dir \
# then we use "pip list" to ensure we don't have more than one pip version installed
# https://github.com/docker-library/python/pull/100
	&& [ "$(pip list |tac|tac| awk -F '[ ()]+' '$1 == "pip" { print $2; exit }')" = "$PYTHON_PIP_VERSION" ] \
	\
	&& find /usr/local -depth \
		\( \
			\( -type d -a -name test -o -name tests \) \
			-o \
			\( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
		\) -exec rm -rf '{}' + \
	&& apt-get purge -y --auto-remove tcl-dev tk-dev \
    && rm -rf /usr/src/python ~/.cache

ENV JAVA_VERSION=8 \
    JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64" \
    JDK_VERSION="8u171-b11-2~14.04" \
    JDK_HOME="/usr/lib/jvm/java-8-openjdk-amd64" \
    JRE_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre" \
    ANT_VERSION=1.9.6 \
    MAVEN_VERSION=3.3.3 \
    MAVEN_HOME="/usr/share/maven" \
    MAVEN_CONFIG="/root/.m2" \
    GRADLE_VERSION=2.7 \
    PROPERTIES_COMMON_VERSION=0.92.37.8 \
    PYTHON_TOOL_VERSION="3.3-*"

# Install Java
RUN set -ex \
    && apt-get update \
    && apt-get install -y software-properties-common=$PROPERTIES_COMMON_VERSION \
    && add-apt-repository ppa:openjdk-r/ppa \
    && apt-get update \
    && apt-get -y install python-setuptools=$PYTHON_TOOL_VERSION \
    && apt-get -y install openjdk-$JAVA_VERSION-jdk=$JDK_VERSION \
    && apt-get clean \
    # Ensure Java cacerts symlink points to valid location
    && update-ca-certificates -f \
    && mkdir -p /usr/src/ant \
    && wget "http://archive.apache.org/dist/ant/binaries/apache-ant-$ANT_VERSION-bin.tar.gz" -O /usr/src/ant/apache-ant-$ANT_VERSION-bin.tar.gz \
    && tar -xzf /usr/src/ant/apache-ant-$ANT_VERSION-bin.tar.gz -C /usr/local \
    && ln -s /usr/local/apache-ant-$ANT_VERSION/bin/ant /usr/bin/ant \
    && rm -rf /usr/src/ant \
    && mkdir -p /usr/share/maven /usr/share/maven/ref $MAVEN_CONFIG \
    && curl -fsSL "https://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" \
        | tar -xzC /usr/share/maven --strip-components=1 \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn \
    && mkdir -p /usr/src/gradle \
    && wget "https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip" -O /usr/src/gradle/gradle-$GRADLE_VERSION-bin.zip \
    && unzip /usr/src/gradle/gradle-$GRADLE_VERSION-bin.zip -d /usr/local \
    && ln -s /usr/local/gradle-$GRADLE_VERSION/bin/gradle /usr/bin/gradle \
    && rm -rf /usr/src/gradle \
    && rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY m2-settings.xml $MAVEN_CONFIG/settings.xml

# MMS build environment
RUN set -ex \
    && apt-get update \
    && pip install retrying \
    && pip install mock \
    && pip install pytest -U \
    && pip install pylint

# Install protobuf
RUN wget https://github.com/google/protobuf/archive/v3.4.1.zip \
    && unzip v3.4.1.zip && rm v3.4.1.zip \
    && cd protobuf-3.4.1 && ./autogen.sh && ./configure --prefix=/usr && make && make install && cd .. \
    && rm -r protobuf-3.4.1


================================================
FILE: ci/Dockerfile.python3.6
================================================
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#     http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
#

FROM ubuntu:14.04.5

ENV LANG="C.UTF-8"

ENV DOCKER_BUCKET="download.docker.com" \
    DOCKER_VERSION="17.09.0-ce" \
    DOCKER_CHANNEL="stable" \
    DOCKER_SHA256="a9e90a73c3cdfbf238f148e1ec0eaff5eb181f92f35bdd938fd7dab18e1c4647" \
    DIND_COMMIT="3b5fac462d21ca164b3778647420016315289034" \
    DOCKER_COMPOSE_VERSION="1.16.1" \
    GITVERSION_VERSION="3.6.5"

# Install git
RUN set -ex \
    && apt-get update \
    && apt-get install software-properties-common -y --no-install-recommends\
    && apt-add-repository ppa:git-core/ppa \
    && apt-get update \
    && apt-get install git -y --no-install-recommends\
    && git version

RUN set -ex \
    && echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/99use-gzip-compression \
    && apt-get update \
    && apt-get install -y --no-install-recommends wget=1.15-* fakeroot=1.20-* ca-certificates \
        autoconf=2.69-* automake=1:1.14.* less=458-* groff=1.22.* \
        bzip2=1.0.* file=1:5.14-* g++=4:4.8.* gcc=4:4.8.* imagemagick=8:6.7.* \
        libbz2-dev=1.0.* libc6-dev=2.19-* libcurl4-openssl-dev=7.35.* curl=7.35.* \
        libdb-dev=1:5.3.* libevent-dev=2.0.* libffi-dev=3.1~* \
        libgeoip-dev=1.6.* libglib2.0-dev=2.40.* libjpeg-dev=8c-* \
        libkrb5-dev=1.12+* liblzma-dev=5.1.* libmagickcore-dev=8:6.7.* \
        libmagickwand-dev=8:6.7.* libmysqlclient-dev=5.5.* libncurses5-dev=5.9+* \
        libpng12-dev=1.2.* libpq-dev=9.3.* libreadline-dev=6.3-* libsqlite3-dev=3.8.* \
        libssl-dev=1.0.* libtool=2.4.* libwebp-dev=0.4.* libxml2-dev=2.9.* \
        libxslt1-dev=1.1.* libyaml-dev=0.1.* make=3.81-* patch=2.7.* xz-utils=5.1.* \
        zlib1g-dev=1:1.2.* tcl=8.6.* tk=8.6.* \
        e2fsprogs=1.42.* iptables=1.4.* xfsprogs=3.1.* xz-utils=5.1.* \
        mono-mcs=3.2.* libcurl4-openssl-dev=7.35.* liberror-perl=0.17-* unzip=6.0-*\
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Download and set up GitVersion
RUN set -ex \
    && wget "https://github.com/GitTools/GitVersion/releases/download/v${GITVERSION_VERSION}/GitVersion_${GITVERSION_VERSION}.zip" -O /tmp/GitVersion_${GITVERSION_VERSION}.zip \
    && mkdir -p /usr/local/GitVersion_${GITVERSION_VERSION} \
    && unzip /tmp/GitVersion_${GITVERSION_VERSION}.zip -d /usr/local/GitVersion_${GITVERSION_VERSION} \
    && rm /tmp/GitVersion_${GITVERSION_VERSION}.zip \
    && echo "mono /usr/local/GitVersion_${GITVERSION_VERSION}/GitVersion.exe /output json /showvariable \$1" >> /usr/local/bin/gitversion \
    && chmod +x /usr/local/bin/gitversion
# Install Docker
RUN set -ex \
    && curl -fSL "https://${DOCKER_BUCKET}/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz" -o docker.tgz \
    && echo "${DOCKER_SHA256} *docker.tgz" | sha256sum -c - \
    && tar --extract --file docker.tgz --strip-components 1  --directory /usr/local/bin/ \
    && rm docker.tgz \
    && docker -v \
# set up subuid/subgid so that "--userns-remap=default" works out-of-the-box
    && addgroup dockremap \
    && useradd -g dockremap dockremap \
    && echo 'dockremap:165536:65536' >> /etc/subuid \
    && echo 'dockremap:165536:65536' >> /etc/subgid \
    && wget "https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind" -O /usr/local/bin/dind \
    && curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-Linux-x86_64 > /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/dind /usr/local/bin/docker-compose \
# Ensure docker-compose works
    && docker-compose version

VOLUME /var/lib/docker

COPY dockerd-entrypoint.sh /usr/local/bin/

ENV PATH="/usr/local/bin:$PATH" \
    GPG_KEY="0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D" \
    PYTHON_VERSION="3.6.5" \
    PYTHON_PIP_VERSION="10.0.0" \
    LC_ALL=C.UTF-8 \
    LANG=C.UTF-8

RUN apt-get update && apt-get install -y --no-install-recommends \
        tcl-dev tk-dev \
    && rm -rf /var/lib/apt/lists/* \
    \
    && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
	&& wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
	&& export GNUPGHOME="$(mktemp -d)" \
	&& (gpg --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$GPG_KEY" \
        || gpg --keyserver pgp.mit.edu --recv-keys "$GPG_KEY" \
        || gpg --keyserver keyserver.ubuntu.com --recv-keys "$GPG_KEY") \
	&& gpg --batch --verify python.tar.xz.asc python.tar.xz \
	&& rm -r "$GNUPGHOME" python.tar.xz.asc \
	&& mkdir -p /usr/src/python \
	&& tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
	&& rm python.tar.xz \
	\
	&& cd /usr/src/python \
	&& ./configure \
		--enable-loadable-sqlite-extensions \
		--enable-shared \
	&& make -j$(nproc) \
	&& make install \
	&& ldconfig \
	\
# explicit path to "pip3" to ensure distribution-provided "pip3" cannot interfere
	&& if [ ! -e /usr/local/bin/pip3 ]; then : \
		&& wget -O /tmp/get-pip.py 'https://bootstrap.pypa.io/get-pip.py' \
		&& python3 /tmp/get-pip.py "pip==$PYTHON_PIP_VERSION" \
		&& rm /tmp/get-pip.py \
	; fi \
# we use "--force-reinstall" for the case where the version of pip we're trying to install is the same as the version bundled with Python
# ("Requirement already up-to-date: pip==8.1.2 in /usr/local/lib/python3.6/site-packages")
# https://github.com/docker-library/python/pull/143#issuecomment-241032683
	&& pip3 install --no-cache-dir --upgrade --force-reinstall "pip==$PYTHON_PIP_VERSION" \
        && pip install awscli==1.* boto3 pipenv virtualenv --no-cache-dir \
# then we use "pip list" to ensure we don't have more than one pip version installed
# https://github.com/docker-library/python/pull/100
	&& [ "$(pip list |tac|tac| awk -F '[ ()]+' '$1 == "pip" { print $2; exit }')" = "$PYTHON_PIP_VERSION" ] \
	\
	&& find /usr/local -depth \
		\( \
			\( -type d -a -name test -o -name tests \) \
			-o \
			\( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
		\) -exec rm -rf '{}' + \
	&& apt-get purge -y --auto-remove tcl-dev tk-dev \
	&& rm -rf /usr/src/python ~/.cache \
	&& cd /usr/local/bin \
	&& { [ -e easy_install ] || ln -s easy_install-* easy_install; } \
	&& ln -s idle3 idle \
	&& ln -s pydoc3 pydoc \
	&& ln -s python3 python \
	&& ln -s python3-config python-config \
        && rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/*

ENV JAVA_VERSION=8 \
    JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64" \
    JDK_VERSION="8u171-b11-2~14.04" \
    JDK_HOME="/usr/lib/jvm/java-8-openjdk-amd64" \
    JRE_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre" \
    ANT_VERSION=1.9.6 \
    MAVEN_VERSION=3.3.3 \
    MAVEN_HOME="/usr/share/maven" \
    MAVEN_CONFIG="/root/.m2" \
    GRADLE_VERSION=2.7 \
    PROPERTIES_COMMON_VERSION=0.92.37.8 \
    PYTHON_TOOL_VERSION="3.3-*"

# Install Java
RUN set -ex \
    && apt-get update \
    && apt-get install -y software-properties-common=$PROPERTIES_COMMON_VERSION \
    && add-apt-repository ppa:openjdk-r/ppa \
    && apt-get update \
    && apt-get -y install python-setuptools=$PYTHON_TOOL_VERSION \
    && apt-get -y install openjdk-$JAVA_VERSION-jdk=$JDK_VERSION \
    && apt-get clean \
    # Ensure Java cacerts symlink points to valid location
    && update-ca-certificates -f \
    && mkdir -p /usr/src/ant \
    && wget "http://archive.apache.org/dist/ant/binaries/apache-ant-$ANT_VERSION-bin.tar.gz" -O /usr/src/ant/apache-ant-$ANT_VERSION-bin.tar.gz \
    && tar -xzf /usr/src/ant/apache-ant-$ANT_VERSION-bin.tar.gz -C /usr/local \
    && ln -s /usr/local/apache-ant-$ANT_VERSION/bin/ant /usr/bin/ant \
    && rm -rf /usr/src/ant \
    && mkdir -p /usr/share/maven /usr/share/maven/ref $MAVEN_CONFIG \
    && curl -fsSL "https://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" \
        | tar -xzC /usr/share/maven --strip-components=1 \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn \
    && mkdir -p /usr/src/gradle \
    && wget "https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip" -O /usr/src/gradle/gradle-$GRADLE_VERSION-bin.zip \
    && unzip /usr/src/gradle/gradle-$GRADLE_VERSION-bin.zip -d /usr/local \
    && ln -s /usr/local/gradle-$GRADLE_VERSION/bin/gradle /usr/bin/gradle \
    && rm -rf /usr/src/gradle \
    && rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY m2-settings.xml $MAVEN_CONFIG/settings.xml

# MMS build environment
RUN set -ex \
    && apt-get update \
    && pip install retrying \
    && pip install mock \
    && pip install pytest -U \
    && pip install pylint

# Install protobuf
RUN wget https://github.com/google/protobuf/archive/v3.4.1.zip \
    && unzip v3.4.1.zip && rm v3.4.1.zip \
    && cd protobuf-3.4.1 && ./autogen.sh && ./configure --prefix=/usr && make && make install && cd .. \
    && rm -r protobuf-3.4.1


================================================
FILE: ci/README.md
================================================
# Model Server CI build

Model Server uses AWS CodeBuild for its CI builds. This folder contains the scripts needed for AWS CodeBuild.

## buildspec.yml
buildspec.yml contains the MMS build logic used by AWS CodeBuild.

## Docker images
MMS uses customized Docker images for its AWS CodeBuild projects. To make sure MMS is compatible with
 both Python 2 and Python 3, we use two build projects. We publish two CodeBuild Docker
 images on Docker Hub:
* awsdeeplearningteam/mms-build:python2.7
* awsdeeplearningteam/mms-build:python3.6

The following files in this folder are used to create the Docker images:
* Dockerfile.python2.7 - Dockerfile for awsdeeplearningteam/mms-build:python2.7
* Dockerfile.python3.6 - Dockerfile for awsdeeplearningteam/mms-build:python3.6
* dockerd-entrypoint.sh - AWS codebuild entrypoint script, required by AWS codebuild
* m2-settings.xml - Limits which repositories Maven/Gradle can use in the Docker container, provided by AWS CodeBuild.

## AWS codebuild local
To make it easy for developers to debug build issues locally, MMS supports AWS CodeBuild Local.
Developers can use the following command to build MMS locally:
```bash
$ cd multi-model-server
$ ./run_ci_tests.sh
```

To avoid pull request build failures on GitHub, developers should always make sure the local build passes.
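
ci/dockerd-entrypoint.sh starts an inner Docker daemon and waits for it with a bounded retry loop before handing off to the build command. A minimal, self-contained sketch of that wait pattern (the `wait_for` helper name is ours, not part of the repo):

```bash
#!/bin/sh
# Bounded-retry wait, modeled on the loop in ci/dockerd-entrypoint.sh:
# poll a readiness command once per second and give up after N tries.
wait_for() {
    timeout=$1
    shift
    tries=0
    until "$@" >/dev/null 2>&1
    do
        if [ "$tries" -gt "$timeout" ]; then
            echo "Timed out waiting for: $*" >&2
            return 1
        fi
        tries=$((tries + 1))
        sleep 1
    done
}

# `true` is ready immediately; `false` never becomes ready.
wait_for 5 true && echo "ready"
wait_for 0 false || echo "timed out"
```

The real entrypoint polls `docker info` the same way, dumping /var/log/docker.log before exiting on timeout.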


================================================
FILE: ci/buildspec.yml
================================================
# Build Spec for AWS CodeBuild CI

version: 0.2

phases:
  install:
    commands:
      - apt-get update
      - apt-get install -y curl
      - pip install pip -U
      - pip install future
      - pip install Pillow
      - pip install pytest==4.0.0
      - pip install wheel
      - pip install twine
      - pip install pytest-mock -U
      - pip install requests
      - pip install -U -e .
      - pip install mxnet==1.5.0
      - cd model-archiver/ && pip install -U -e . && cd ../

  build:
    commands:
      - frontend/gradlew -p frontend build
      - python -m pytest mms/tests/unit_tests
      - cd model-archiver/ && python -m pytest model_archiver/tests/unit_tests && cd ../
      - cd model-archiver/ && python -m pytest model_archiver/tests/integ_tests && cd ../
      - cd serving-sdk/ && mvn clean deploy && cd ../
      # integration test is broken: https://github.com/awslabs/multi-model-server/issues/437
      #- python -m pytest mms/tests/integration_tests
      - pylint -rn --rcfile=./mms/tests/pylintrc mms/.
      - cd model-archiver/ && pylint -rn --rcfile=./model_archiver/tests/pylintrc model_archiver/. && cd ../
      - $NIGHTLYBUILD
      - eval $NIGHTLYUPLOAD

artifacts:
  files:
    - dist/*.whl
    - model_archiver/dist/*.whl
    - frontend/server/build/reports/**/*
    - frontend/modelarchive/build/reports/**/*
    - frontend/cts/build/reports/**/*


================================================
FILE: ci/dockerd-entrypoint.sh
================================================
#!/bin/sh
set -e

/usr/local/bin/dockerd \
	--host=unix:///var/run/docker.sock \
	--host=tcp://127.0.0.1:2375 \
	--storage-driver=overlay &>/var/log/docker.log &


tries=0
d_timeout=60
until docker info >/dev/null 2>&1
do
	if [ "$tries" -gt "$d_timeout" ]; then
		cat /var/log/docker.log
		echo 'Timed out trying to connect to internal docker host.' >&2
		exit 1
	fi
	tries=$(( tries + 1 ))
	sleep 1
done

eval "$@"


================================================
FILE: ci/m2-settings.xml
================================================
<settings>
  <profiles>
    <profile>
      <id>securecentral</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <!--Override the repository (and pluginRepository) "central" from the
         Maven Super POM -->
      <repositories>
        <repository>
          <id>central</id>
          <url>https://repo1.maven.org/maven2</url>
          <releases>
            <enabled>true</enabled>
          </releases>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>https://repo1.maven.org/maven2</url>
          <releases>
            <enabled>true</enabled>
          </releases>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
</settings>


================================================
FILE: docker/Dockerfile.cpu
================================================
FROM ubuntu:18.04

ENV PYTHONUNBUFFERED TRUE

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
    fakeroot \
    ca-certificates \
    dpkg-dev \
    g++ \
    python3-dev \
    openjdk-8-jdk-headless \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/* \
    && cd /tmp \
    && curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py

RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN update-alternatives --install /usr/local/bin/pip pip /usr/local/bin/pip3 1

RUN pip install --no-cache-dir multi-model-server \
    && pip install --no-cache-dir mxnet-mkl==1.4.0

RUN useradd -m model-server \
    && mkdir -p /home/model-server/tmp

COPY dockerd-entrypoint.sh /usr/local/bin/dockerd-entrypoint.sh
COPY config.properties /home/model-server

RUN chmod +x /usr/local/bin/dockerd-entrypoint.sh \
    && chown -R model-server /home/model-server

EXPOSE 8080 8081

USER model-server
WORKDIR /home/model-server
ENV TEMP=/home/model-server/tmp
ENTRYPOINT ["/usr/local/bin/dockerd-entrypoint.sh"]
CMD ["serve"]

LABEL maintainer="dantu@amazon.com, rakvas@amazon.com, lufen@amazon.com, dden@amazon.com"


================================================
FILE: docker/Dockerfile.gpu
================================================
FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu18.04

ENV PYTHONUNBUFFERED TRUE

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
    fakeroot \
    ca-certificates \
    dpkg-dev \
    g++ \
    python3-dev \
    openjdk-8-jdk-headless \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/* \
    && cd /tmp \
    && curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py

RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN update-alternatives --install /usr/local/bin/pip pip /usr/local/bin/pip3 1

RUN pip install --no-cache-dir multi-model-server \
    && pip install --no-cache-dir mxnet-cu92mkl==1.4.0

RUN useradd -m model-server \
    && mkdir -p /home/model-server/tmp

COPY dockerd-entrypoint.sh /usr/local/bin/dockerd-entrypoint.sh
COPY config.properties /home/model-server

RUN chmod +x /usr/local/bin/dockerd-entrypoint.sh \
    && chown -R model-server /home/model-server

EXPOSE 8080 8081

USER model-server
WORKDIR /home/model-server
ENV TEMP=/home/model-server/tmp
ENTRYPOINT ["/usr/local/bin/dockerd-entrypoint.sh"]
CMD ["serve"]

LABEL maintainer="dantu@amazon.com, rakvas@amazon.com, lufen@amazon.com, dden@amazon.com"
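
The GPU image is used the same way, but the host needs an NVIDIA driver plus the NVIDIA container runtime, and GPUs must be passed through at run time. A sketch (the tag `mms-gpu` is an arbitrary local name; `--gpus` assumes Docker 19.03 or newer):

```shell
# Build from the docker/ directory, as with the CPU image.
docker build -t mms-gpu -f Dockerfile.gpu .

# --gpus all exposes every host GPU to the container; without it,
# mxnet-cu92mkl cannot see any CUDA devices.
docker run -d --name mms-gpu --gpus all -p 8080:8080 -p 8081:8081 mms-gpu
```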


================================================
FILE: docker/Dockerfile.nightly-cpu
================================================
FROM ubuntu:18.04

ENV PYTHONUNBUFFERED TRUE

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
    fakeroot \
    ca-certificates \
    dpkg-dev \
    g++ \
    python3-dev \
    openjdk-8-jdk-headless \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/* \
    && cd /tmp \
    && curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py

RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1
SYMBOL INDEX (1721 symbols across 209 files)

FILE: benchmarks/benchmark.py
  class ChDir (line 118) | class ChDir:
    method __init__ (line 119) | def __init__(self, path):
    method __enter__ (line 123) | def __enter__(self):
    method __exit__ (line 126) | def __exit__(self, *args):
  function basename (line 130) | def basename(path):
  function get_resource (line 134) | def get_resource(name):
  function run_process (line 145) | def run_process(cmd, wait=True, **kwargs):
  function run_single_benchmark (line 161) | def run_single_benchmark(jmx, jmeter_args=dict(), threads=100, out_dir=N...
  function run_multi_benchmark (line 263) | def run_multi_benchmark(key, xs, *args, **kwargs):
  function parseModel (line 314) | def parseModel():
  function decorate_metrics (line 334) | def decorate_metrics(data_frame, row_to_read):
  class Benchmarks (line 345) | class Benchmarks:
    method throughput (line 351) | def throughput():
    method latency (line 359) | def latency():
    method ping (line 367) | def ping():
    method load (line 374) | def load():
    method repeated_scale_calls (line 384) | def repeated_scale_calls():
    method multiple_models (line 395) | def multiple_models():
    method concurrent_inference (line 412) | def concurrent_inference():
  function run_benchmark (line 420) | def run_benchmark():
  function modify_config_props_for_mms (line 430) | def modify_config_props_for_mms(pargs):

FILE: examples/densenet_pytorch/densenet_service.py
  class PyTorchImageClassifier (line 12) | class PyTorchImageClassifier():
    method __init__ (line 18) | def __init__(self):
    method initialize (line 26) | def initialize(self, context):
    method preprocess (line 60) | def preprocess(self, data):
    method inference (line 80) | def inference(self, img, topk=5):
    method postprocess (line 104) | def postprocess(self, inference_output):
  function handle (line 112) | def handle(data, context):

FILE: examples/gluon_alexnet/gluon_hybrid_alexnet.py
  class GluonHybridAlexNet (line 20) | class GluonHybridAlexNet(HybridBlock):
    method __init__ (line 24) | def __init__(self, classes=1000, **kwargs):
    method hybrid_forward (line 55) | def hybrid_forward(self, F, x):
  class HybridAlexnetService (line 61) | class HybridAlexnetService(GluonBaseService):
    method initialize (line 65) | def initialize(self, params):
    method postprocess (line 71) | def postprocess(self, data):
  function hybrid_gluon_alexnet_inf (line 80) | def hybrid_gluon_alexnet_inf(data, context):

FILE: examples/gluon_alexnet/gluon_imperative_alexnet.py
  class GluonImperativeAlexNet (line 20) | class GluonImperativeAlexNet(gluon.Block):
    method __init__ (line 24) | def __init__(self, classes=1000, **kwargs):
    method forward (line 54) | def forward(self, x):
  class ImperativeAlexnetService (line 60) | class ImperativeAlexnetService(GluonBaseService):
    method initialize (line 65) | def initialize(self, params):
    method postprocess (line 70) | def postprocess(self, data):
  function imperative_gluon_alexnet_inf (line 79) | def imperative_gluon_alexnet_inf(data, context):

FILE: examples/gluon_alexnet/gluon_pretrained_alexnet.py
  class PretrainedAlexnetService (line 19) | class PretrainedAlexnetService(GluonBaseService):
    method initialize (line 23) | def initialize(self, params):
    method postprocess (line 32) | def postprocess(self, data):
  function pretrained_gluon_alexnet (line 46) | def pretrained_gluon_alexnet(data, context):

FILE: examples/gluon_character_cnn/gluon_crepe.py
  class GluonCrepe (line 21) | class GluonCrepe(HybridBlock):
    method __init__ (line 26) | def __init__(self, classes=7, **kwargs):
    method hybrid_forward (line 49) | def hybrid_forward(self, F, x):
  class CharacterCNNService (line 55) | class CharacterCNNService(object):
    method __init__ (line 60) | def __init__(self):
    method initialize (line 69) | def initialize(self, params):
    method preprocess (line 92) | def preprocess(self, data):
    method inference (line 112) | def inference(self, data):
    method postprocess (line 117) | def postprocess(self, data):
    method predict (line 125) | def predict(self, data):
  function crepe_inference (line 134) | def crepe_inference(data, context):

FILE: examples/lstm_ptb/lstm_ptb_service.py
  class MXNetLSTMService (line 10) | class MXNetLSTMService(ModelHandler):
    method __init__ (line 16) | def __init__(self):
    method initialize (line 34) | def initialize(self, context):
    method preprocess (line 107) | def preprocess(self, data):
    method inference (line 128) | def inference(self, data):
    method postprocess (line 135) | def postprocess(self, data):
  function handle (line 150) | def handle(data, context):

FILE: examples/metrics_cloudwatch/metric_push_example.py
  function generate_system_metrics (line 23) | def generate_system_metrics(mod):
  function push_cloudwatch (line 38) | def push_cloudwatch(metric_json, client):
  function connect_cloudwatch (line 59) | def connect_cloudwatch():

FILE: examples/model_service_template/gluon_base_service.py
  class GluonBaseService (line 21) | class GluonBaseService(object):
    method __init__ (line 28) | def __init__(self):
    method initialize (line 38) | def initialize(self, params):
    method preprocess (line 73) | def preprocess(self, data):
    method inference (line 110) | def inference(self, data):
    method postprocess (line 129) | def postprocess(self, data):
    method predict (line 135) | def predict(self, data):

FILE: examples/model_service_template/model_handler.py
  class ModelHandler (line 18) | class ModelHandler(object):
    method __init__ (line 23) | def __init__(self):
    method initialize (line 29) | def initialize(self, context):
    method preprocess (line 40) | def preprocess(self, batch):
    method inference (line 50) | def inference(self, model_input):
    method postprocess (line 59) | def postprocess(self, inference_output):
    method handle (line 68) | def handle(self, data, context):
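The template above defines the standard MMS custom-service lifecycle: `initialize` runs once per worker, then `handle` chains `preprocess` → `inference` → `postprocess` for each batch. A minimal plain-Python sketch of that pattern, using a hypothetical `EchoHandler` in place of a real model:

```python
class EchoHandler(object):
    """Sketch of the ModelHandler lifecycle (hypothetical handler, no real model)."""

    def __init__(self):
        self.initialized = False

    def initialize(self, context):
        # A real handler loads model artifacts from context.system_properties here.
        self.initialized = True

    def preprocess(self, batch):
        # Normalize each request in the batch to a string.
        return [str(item) for item in batch]

    def inference(self, model_input):
        # A real handler runs the model; this sketch just echoes in upper case.
        return [s.upper() for s in model_input]

    def postprocess(self, inference_output):
        # MMS expects one response element per request in the batch.
        return inference_output

    def handle(self, data, context):
        if not self.initialized:
            self.initialize(context)
        return self.postprocess(self.inference(self.preprocess(data)))


_service = EchoHandler()


def handle(data, context):
    # Module-level entry point named by the model archive manifest.
    if data is None:
        return None
    return _service.handle(data, context)
```

The module-level `handle(data, context)` function mirrors the entry points that recur throughout the example services in this repository.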

FILE: examples/model_service_template/mxnet_model_service.py
  class MXNetModelService (line 23) | class MXNetModelService(ModelHandler):
    method __init__ (line 30) | def __init__(self):
    method get_model_files_prefix (line 39) | def get_model_files_prefix(self, context):
    method initialize (line 42) | def initialize(self, context):
    method preprocess (line 99) | def preprocess(self, batch):
    method inference (line 123) | def inference(self, model_input):
    method postprocess (line 152) | def postprocess(self, inference_output):
  function check_input_shape (line 159) | def check_input_shape(inputs, signature):

FILE: examples/model_service_template/mxnet_utils/image.py
  function transform_shape (line 24) | def transform_shape(img_arr, dim_order='NCHW'):
  function read (line 45) | def read(buf, flag=1, to_rgb=True, out=None):
  function write (line 74) | def write(img_arr, flag=1, output_format='jpeg', dim_order='CHW'):
  function resize (line 108) | def resize(src, new_width, new_height, interp=2):
  function fixed_crop (line 145) | def fixed_crop(src, x0, y0, w, h, size=None, interp=2):
  function color_normalize (line 170) | def color_normalize(src, mean, std=None):

FILE: examples/model_service_template/mxnet_utils/ndarray.py
  function top_probability (line 18) | def top_probability(data, labels, top=5):
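`top_probability` selects the highest-scoring classes from a prediction vector. A plain-Python sketch of the top-k idea (the repository's version operates on MXNet NDArrays; the dict keys here are illustrative assumptions):

```python
def top_probability(probs, labels, top=5):
    """Return the `top` labels paired with their probabilities, highest first.

    Plain-Python sketch: `probs` is a list of floats, `labels` the matching
    class names.
    """
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return [{"class": label, "probability": float(p)} for label, p in ranked[:top]]
```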

FILE: examples/model_service_template/mxnet_utils/nlp.py
  function encode_sentences (line 20) | def encode_sentences(sentences, vocab=None, invalid_label=-1, invalid_ke...
  function pad_sentence (line 70) | def pad_sentence(sentence, buckets, invalid_label=-1, data_name='data', ...
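The two NLP helpers map token sequences to integer ids and pad them to fixed bucket lengths for batched RNN inference. A simplified sketch of both, assuming list inputs (the repository's versions take extra parameters such as `invalid_key` and named data fields):

```python
def encode_sentences(sentences, vocab=None, invalid_label=-1):
    """Map token lists to integer ids, growing the vocab as new tokens appear."""
    if vocab is None:
        vocab = {}
    encoded = []
    for sent in sentences:
        ids = []
        for tok in sent:
            if tok not in vocab:
                vocab[tok] = len(vocab)
            ids.append(vocab.get(tok, invalid_label))
        encoded.append(ids)
    return encoded, vocab


def pad_sentence(ids, buckets, invalid_label=-1):
    """Pad an id sequence to the smallest bucket length that fits it."""
    for length in sorted(buckets):
        if len(ids) <= length:
            return ids + [invalid_label] * (length - len(ids))
    raise ValueError("sentence longer than the largest bucket")
```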

FILE: examples/model_service_template/mxnet_vision_batching.py
  class MXNetVisionServiceBatching (line 19) | class MXNetVisionServiceBatching(object):
    method __init__ (line 20) | def __init__(self):
    method top_probability (line 34) | def top_probability(self, data, labels, top=5):
    method initialize (line 55) | def initialize(self, context):
    method inference (line 111) | def inference(self, model_input):
    method preprocess (line 129) | def preprocess(self, request):
    method postprocess (line 183) | def postprocess(self, data):
  function handle (line 196) | def handle(data, context):

FILE: examples/model_service_template/mxnet_vision_service.py
  class MXNetVisionService (line 20) | class MXNetVisionService(MXNetModelService):
    method preprocess (line 28) | def preprocess(self, request):
    method postprocess (line 69) | def postprocess(self, data):
  function handle (line 82) | def handle(data, context):

FILE: examples/sockeye_translate/model_handler.py
  class ModelHandler (line 17) | class ModelHandler(object):
    method __init__ (line 22) | def __init__(self):
    method initialize (line 28) | def initialize(self, context):
    method preprocess (line 39) | def preprocess(self, batch):
    method inference (line 49) | def inference(self, model_input):
    method postprocess (line 58) | def postprocess(self, inference_output):
    method handle (line 67) | def handle(self, data, context):

FILE: examples/sockeye_translate/preprocessor.py
  class Preprocessor (line 11) | class Preprocessor(object):
    method __init__ (line 12) | def __init__(self, bpe_code_file):
    method unescape (line 49) | def unescape(self, line):
    method bpe_encode (line 69) | def bpe_encode(self, text):
  class JoshuaPreprocessor (line 73) | class JoshuaPreprocessor(Preprocessor):
    method __init__ (line 74) | def __init__(self, bpe_code_file, joshua_path, moses_path, lang):
    method run (line 85) | def run(self, text):
  class ChineseCharPreprocessor (line 97) | class ChineseCharPreprocessor(JoshuaPreprocessor):
    method __init__ (line 98) | def __init__(self, bpe_code_file, joshua_path, moses_path):
    method run (line 105) | def run(self, text):
  class Detokenizer (line 124) | class Detokenizer():
    method __init__ (line 125) | def __init__(self, path):
    method run (line 131) | def run(self, text):

FILE: examples/sockeye_translate/sockeye_service.py
  function decode_bytes (line 16) | def decode_bytes(data):
  function get_text (line 29) | def get_text(req):
  function get_file_data (line 46) | def get_file_data(req):
  function read_sockeye_args (line 61) | def read_sockeye_args(params_path):
  class SockeyeService (line 77) | class SockeyeService(ModelHandler):
    method __init__ (line 82) | def __init__(self):
    method initialize (line 91) | def initialize(self, context):
    method preprocess (line 190) | def preprocess(self, batch):
    method inference (line 214) | def inference(self, texts):
    method postprocess (line 239) | def postprocess(self, outputs):
  function handle (line 258) | def handle(data, context):

FILE: examples/ssd/ssd_service.py
  class SSDService (line 17) | class SSDService(MXNetVisionService):
    method __init__ (line 24) | def __init__(self):
    method preprocess (line 39) | def preprocess(self, batch):
    method postprocess (line 63) | def postprocess(self, data):
  function handle (line 106) | def handle(data, context):

FILE: frontend/cts/src/main/java/com/amazonaws/ml/mms/cts/Cts.java
  class Cts (line 34) | public final class Cts {
    method Cts (line 41) | private Cts() {
    method main (line 45) | public static void main(String[] args) {
    method startTest (line 52) | private void startTest() {
    method runTest (line 104) | private void runTest(HttpClient client, ModelInfo info, Logger logger)
    method predict (line 129) | private boolean predict(HttpClient client, int type, String modelName)
    method loadImage (line 167) | private byte[] loadImage(String path, String fileName) throws IOExcept...
    method updateLog4jConfiguration (line 177) | private static void updateLog4jConfiguration() {

FILE: frontend/cts/src/main/java/com/amazonaws/ml/mms/cts/HttpClient.java
  class HttpClient (line 47) | public class HttpClient {
    method HttpClient (line 57) | public HttpClient(int managementPort, int inferencePort) {
    method registerModel (line 64) | public boolean registerModel(String modelName, String modelUrl)
    method unregisterModel (line 91) | public boolean unregisterModel(String modelName) throws InterruptedExc...
    method predict (line 114) | public boolean predict(String modelName, byte[] content, CharSequence ...
    method predict (line 142) | public boolean predict(
    method bootstrap (line 167) | private Bootstrap bootstrap(ClientHandler handler) {
    method connect (line 188) | private Channel connect(Bootstrap b, int port) throws InterruptedExcep...
    class ClientHandler (line 193) | @ChannelHandler.Sharable
      method ClientHandler (line 199) | public ClientHandler() {}
      method getStatusCode (line 201) | public int getStatusCode() {
      method getContent (line 205) | public String getContent() {
      method channelRead0 (line 209) | @Override
      method exceptionCaught (line 216) | @Override

FILE: frontend/cts/src/main/java/com/amazonaws/ml/mms/cts/ModelInfo.java
  class ModelInfo (line 15) | public class ModelInfo {
    method ModelInfo (line 144) | public ModelInfo(String modelName) {
    method ModelInfo (line 148) | public ModelInfo(String modelName, int type) {
    method ModelInfo (line 152) | public ModelInfo(boolean legacy, String modelName) {
    method ModelInfo (line 156) | public ModelInfo(boolean legacy, String modelName, int type) {
    method ModelInfo (line 166) | public ModelInfo(String modelName, String url) {
    method ModelInfo (line 170) | public ModelInfo(String modelName, String url, int type) {
    method getModelName (line 176) | public String getModelName() {
    method getUrl (line 180) | public String getUrl() {
    method getType (line 184) | public int getType() {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/DownloadModelException.java
  class DownloadModelException (line 15) | public class DownloadModelException extends ModelException {
    method DownloadModelException (line 25) | public DownloadModelException(String message) {
    method DownloadModelException (line 41) | public DownloadModelException(String message, Throwable cause) {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/Hex.java
  class Hex (line 15) | public final class Hex {
    method Hex (line 21) | private Hex() {}
    method toHexString (line 23) | public static String toHexString(byte[] block) {
    method toHexString (line 27) | public static String toHexString(byte[] block, int offset, int len) {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/InvalidModelException.java
  class InvalidModelException (line 15) | public class InvalidModelException extends ModelException {
    method InvalidModelException (line 25) | public InvalidModelException(String message) {
    method InvalidModelException (line 41) | public InvalidModelException(String message, Throwable cause) {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/LegacyManifest.java
  class LegacyManifest (line 18) | public class LegacyManifest {
    method LegacyManifest (line 41) | public LegacyManifest() {}
    method getEngine (line 43) | public Map<String, Object> getEngine() {
    method setEngine (line 47) | public void setEngine(Map<String, Object> engine) {
    method getDescription (line 51) | public String getDescription() {
    method setDescription (line 55) | public void setDescription(String description) {
    method getLicense (line 59) | public String getLicense() {
    method setLicense (line 63) | public void setLicense(String license) {
    method getVersion (line 67) | public String getVersion() {
    method setVersion (line 71) | public void setVersion(String version) {
    method getServerVersion (line 75) | public String getServerVersion() {
    method setServerVersion (line 79) | public void setServerVersion(String serverVersion) {
    method getModelInfo (line 83) | public ModelInfo getModelInfo() {
    method setModelInfo (line 87) | public void setModelInfo(ModelInfo modelInfo) {
    method getCreatedBy (line 91) | public CreatedBy getCreatedBy() {
    method setCreatedBy (line 95) | public void setCreatedBy(CreatedBy createdBy) {
    method migrate (line 99) | public Manifest migrate() throws InvalidModelException {
    class CreatedBy (line 138) | public static final class CreatedBy {
      method CreatedBy (line 146) | public CreatedBy() {}
      method getAuthor (line 148) | public String getAuthor() {
      method setAuthor (line 152) | public void setAuthor(String author) {
      method getEmail (line 156) | public String getEmail() {
      method setEmail (line 160) | public void setEmail(String email) {
    class ModelInfo (line 165) | public static final class ModelInfo {
      method ModelInfo (line 182) | public ModelInfo() {}
      method getParameters (line 184) | public String getParameters() {
      method setParameters (line 188) | public void setParameters(String parameters) {
      method getSymbol (line 192) | public String getSymbol() {
      method setSymbol (line 196) | public void setSymbol(String symbol) {
      method getDescription (line 200) | public String getDescription() {
      method setDescription (line 204) | public void setDescription(String description) {
      method getModelName (line 208) | public String getModelName() {
      method setModelName (line 212) | public void setModelName(String modelName) {
      method getService (line 216) | public String getService() {
      method setService (line 220) | public void setService(String service) {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/Manifest.java
  class Manifest (line 19) | public class Manifest {
    method Manifest (line 31) | public Manifest() {
    method getSpecificationVersion (line 40) | public String getSpecificationVersion() {
    method setSpecificationVersion (line 44) | public void setSpecificationVersion(String specificationVersion) {
    method getImplementationVersion (line 48) | public String getImplementationVersion() {
    method setImplementationVersion (line 52) | public void setImplementationVersion(String implementationVersion) {
    method getDescription (line 56) | public String getDescription() {
    method setDescription (line 60) | public void setDescription(String description) {
    method getModelServerVersion (line 64) | public String getModelServerVersion() {
    method setModelServerVersion (line 68) | public void setModelServerVersion(String modelServerVersion) {
    method getLicense (line 72) | public String getLicense() {
    method setLicense (line 76) | public void setLicense(String license) {
    method getRuntime (line 80) | public RuntimeType getRuntime() {
    method setRuntime (line 84) | public void setRuntime(RuntimeType runtime) {
    method getEngine (line 88) | public Engine getEngine() {
    method setEngine (line 92) | public void setEngine(Engine engine) {
    method getModel (line 96) | public Model getModel() {
    method setModel (line 100) | public void setModel(Model model) {
    method getPublisher (line 104) | public Publisher getPublisher() {
    method setPublisher (line 108) | public void setPublisher(Publisher publisher) {
    class Publisher (line 112) | public static final class Publisher {
      method Publisher (line 117) | public Publisher() {}
      method getAuthor (line 119) | public String getAuthor() {
      method setAuthor (line 123) | public void setAuthor(String author) {
      method getEmail (line 127) | public String getEmail() {
      method setEmail (line 131) | public void setEmail(String email) {
    class Engine (line 136) | public static final class Engine {
      method Engine (line 141) | public Engine() {}
      method getEngineName (line 143) | public String getEngineName() {
      method setEngineName (line 147) | public void setEngineName(String engineName) {
      method getEngineVersion (line 151) | public String getEngineVersion() {
      method setEngineVersion (line 155) | public void setEngineVersion(String engineVersion) {
    class Model (line 160) | public static final class Model {
      method Model (line 168) | public Model() {}
      method getModelName (line 170) | public String getModelName() {
      method setModelName (line 174) | public void setModelName(String modelName) {
      method getDescription (line 178) | public String getDescription() {
      method setDescription (line 182) | public void setDescription(String description) {
      method getModelVersion (line 186) | public String getModelVersion() {
      method setModelVersion (line 190) | public void setModelVersion(String modelVersion) {
      method getExtensions (line 194) | public Map<String, Object> getExtensions() {
      method setExtensions (line 198) | public void setExtensions(Map<String, Object> extensions) {
      method addExtension (line 202) | public void addExtension(String key, Object value) {
      method getHandler (line 209) | public String getHandler() {
      method setHandler (line 213) | public void setHandler(String handler) {
    type RuntimeType (line 218) | public enum RuntimeType {
      method RuntimeType (line 228) | RuntimeType(String value) {
      method getValue (line 232) | public String getValue() {
      method fromValue (line 236) | public static RuntimeType fromValue(String value) {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ModelArchive.java
  class ModelArchive (line 46) | public class ModelArchive {
    method ModelArchive (line 62) | public ModelArchive(Manifest manifest, String url, File modelDir, bool...
    method downloadModel (line 69) | public static ModelArchive downloadModel(String modelStore, String url)
    method migrate (line 97) | public static void migrate(File legacyModelFile, File destination)
    method download (line 145) | private static File download(String path) throws ModelException, IOExc...
    method load (line 186) | private static ModelArchive load(String url, File dir, boolean extracted)
    method readFile (line 228) | private static <T> T readFile(File file, Class<T> type)
    method findFile (line 237) | private static File findFile(File dir, String fileName, boolean recurs...
    method moveToTopLevel (line 255) | private static void moveToTopLevel(File from, File to) throws IOExcept...
    method unzip (line 268) | public static File unzip(InputStream is, String eTag) throws IOExcepti...
    method validate (line 299) | public void validate() throws InvalidModelException {
    method getHandler (line 327) | public String getHandler() {
    method getManifest (line 331) | public Manifest getManifest() {
    method getUrl (line 335) | public String getUrl() {
    method getModelDir (line 339) | public File getModelDir() {
    method getModelName (line 343) | public String getModelName() {
    method clean (line 347) | public void clean() {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ModelException.java
  class ModelException (line 15) | public class ModelException extends Exception {
    method ModelException (line 25) | public ModelException(String message) {
    method ModelException (line 41) | public ModelException(String message, Throwable cause) {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ModelNotFoundException.java
  class ModelNotFoundException (line 15) | public class ModelNotFoundException extends ModelException {
    method ModelNotFoundException (line 25) | public ModelNotFoundException(String message) {
    method ModelNotFoundException (line 41) | public ModelNotFoundException(String message, Throwable cause) {

FILE: frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ZipUtils.java
  class ZipUtils (line 28) | public final class ZipUtils {
    method ZipUtils (line 30) | private ZipUtils() {}
    method zip (line 32) | public static void zip(File src, File dest, boolean includeRootDir) th...
    method unzip (line 42) | public static void unzip(File src, File dest) throws IOException {
    method unzip (line 46) | public static void unzip(InputStream is, File dest) throws IOException {
    method addToZip (line 65) | public static void addToZip(int prefix, File file, FileFilter filter, ...

FILE: frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/archive/CoverageTest.java
  class CoverageTest (line 19) | public class CoverageTest {
    method test (line 21) | @Test

FILE: frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/archive/Exporter.java
  class Exporter (line 33) | public final class Exporter {
    method Exporter (line 37) | private Exporter() {}
    method main (line 39) | public static void main(String[] args) {
    method printHelp (line 176) | private static void printHelp(String message, Options options) {
    method getJarName (line 183) | private static String getJarName() {
    method findUniqueFile (line 195) | private static File findUniqueFile(File[] list, String extension) thro...
    class Config (line 209) | private static final class Config {
      method Config (line 217) | public Config(CommandLine cmd) {
      method getOptions (line 226) | public static Options getOptions() {
      method getModelName (line 279) | public String getModelName() {
      method setModelName (line 283) | public void setModelName(String modelName) {
      method getModelPath (line 287) | public String getModelPath() {
      method setModelPath (line 291) | public void setModelPath(String modelPath) {
      method getHandler (line 295) | public String getHandler() {
      method setHandler (line 299) | public void setHandler(String handler) {
      method getOutputFile (line 303) | public String getOutputFile() {
      method setOutputFile (line 307) | public void setOutputFile(String outputFile) {
      method getRuntime (line 311) | public String getRuntime() {
      method setRuntime (line 315) | public void setRuntime(String runtime) {

FILE: frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/archive/ModelArchiveTest.java
  class ModelArchiveTest (line 22) | public class ModelArchiveTest {
    method beforeTest (line 26) | @BeforeTest
    method test (line 36) | @Test

FILE: frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/test/TestHelper.java
  class TestHelper (line 29) | public final class TestHelper {
    method TestHelper (line 31) | private TestHelper() {}
    method testGetterSetters (line 33) | public static void testGetterSetters(Class<?> baseClass)
    method getClasses (line 75) | private static List<Class<?>> getClasses(Class<?> clazz)
    method getMockValue (line 117) | private static Object getMockValue(Class<?> type) {

FILE: frontend/modelarchive/src/test/resources/models/custom-return-code/service.py
  function handle (line 13) | def handle(data, ctx):

FILE: frontend/modelarchive/src/test/resources/models/error_batch/service.py
  function handle (line 16) | def handle(data, context):

FILE: frontend/modelarchive/src/test/resources/models/init-error/invalid_service.py
  function handle (line 15) | def handle(data, context):

FILE: frontend/modelarchive/src/test/resources/models/invalid/invalid_service.py
  function handle (line 15) | def handle(data, context):

FILE: frontend/modelarchive/src/test/resources/models/loading-memory-error/service.py
  function handle (line 12) | def handle(ctx, data):

FILE: frontend/modelarchive/src/test/resources/models/logging/service.py
  class LoggingService (line 18) | class LoggingService(object):
    method __init__ (line 25) | def __init__(self):
    method __del__ (line 30) | def __del__(self):
    method initialize (line 33) | def initialize(self, context):
    method inference (line 44) | def inference(model_input):
    method handle (line 55) | def handle(self, data, context):
  function handle (line 81) | def handle(data, context):

FILE: frontend/modelarchive/src/test/resources/models/noop-no-manifest/service.py
  class NoopService (line 18) | class NoopService(object):
    method __init__ (line 25) | def __init__(self):
    method initialize (line 29) | def initialize(self, context):
    method preprocess (line 40) | def preprocess(data):
    method inference (line 50) | def inference(model_input):
    method postprocess (line 60) | def postprocess(model_output):
    method handle (line 63) | def handle(self, data, context):
  function handle (line 90) | def handle(data, context):

FILE: frontend/modelarchive/src/test/resources/models/noop-v0.1/noop_service.py
  class NoopService (line 18) | class NoopService(SingleNodeService):
    method _inference (line 23) | def _inference(self, data):
    method ping (line 26) | def ping(self):

FILE: frontend/modelarchive/src/test/resources/models/noop-v1.0-config-tests/service.py
  class NoopService (line 18) | class NoopService(object):
    method __init__ (line 25) | def __init__(self):
    method initialize (line 29) | def initialize(self, context):
    method preprocess (line 40) | def preprocess(data):
    method inference (line 50) | def inference(model_input):
    method postprocess (line 60) | def postprocess(model_output):
    method handle (line 63) | def handle(self, data, context):
  function handle (line 112) | def handle(data, context):

FILE: frontend/modelarchive/src/test/resources/models/noop-v1.0/service.py
  class NoopService (line 18) | class NoopService(object):
    method __init__ (line 25) | def __init__(self):
    method initialize (line 29) | def initialize(self, context):
    method preprocess (line 40) | def preprocess(data):
    method inference (line 50) | def inference(model_input):
    method postprocess (line 60) | def postprocess(model_output):
    method handle (line 63) | def handle(self, data, context):
  function handle (line 112) | def handle(data, context):

FILE: frontend/modelarchive/src/test/resources/models/prediction-memory-error/service.py
  function handle (line 12) | def handle(data, ctx):

FILE: frontend/modelarchive/src/test/resources/models/respheader-test/service.py
  class NoopService (line 18) | class NoopService(object):
    method __init__ (line 25) | def __init__(self):
    method initialize (line 29) | def initialize(self, context):
    method preprocess (line 40) | def preprocess(data):
    method inference (line 50) | def inference(model_input):
    method postprocess (line 60) | def postprocess(model_output):
    method handle (line 63) | def handle(self, data, context):
  function handle (line 102) | def handle(data, context):

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/ModelServer.java
  class ModelServer (line 59) | public class ModelServer {
    method ModelServer (line 70) | public ModelServer(ConfigManager configManager) {
    method main (line 75) | public static void main(String[] args) {
    method startAndWait (line 101) | public void startAndWait() throws InterruptedException, IOException, G...
    method getDefaultModelName (line 116) | private String getDefaultModelName(String name) {
    method initModelStore (line 125) | private void initModelStore() {
    method exitModelStore (line 234) | private void exitModelStore() {
    method initializeServer (line 240) | public ChannelFuture initializeServer(
    method start (line 307) | public List<ChannelFuture> start()
    method validEndpoint (line 350) | private boolean validEndpoint(Annotation a, EndpointTypes type) {
    method registerEndpoints (line 356) | private HashMap<String, ModelServerEndpoint> registerEndpoints(Endpoin...
    method isRunning (line 371) | public boolean isRunning() {
    method stop (line 375) | public void stop() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/ServerInitializer.java
  class ServerInitializer (line 36) | public class ServerInitializer extends ChannelInitializer<Channel> {
    method ServerInitializer (line 47) | public ServerInitializer(SslContext sslCtx, ConnectorType type) {
    method initChannel (line 53) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/ApiDescriptionRequestHandler.java
  class ApiDescriptionRequestHandler (line 12) | public class ApiDescriptionRequestHandler extends HttpRequestHandlerChain {
    method ApiDescriptionRequestHandler (line 16) | public ApiDescriptionRequestHandler(ConnectorType type) {
    method handleRequest (line 20) | @Override
    method isApiDescription (line 41) | private boolean isApiDescription(String[] segments) {
    method handleApiDescription (line 45) | private void handleApiDescription(ChannelHandlerContext ctx) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/BadRequestException.java
  class BadRequestException (line 15) | public class BadRequestException extends IllegalArgumentException {
    method BadRequestException (line 25) | public BadRequestException(String message) {
    method BadRequestException (line 39) | public BadRequestException(Throwable cause) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/ConflictStatusException.java
  class ConflictStatusException (line 15) | public class ConflictStatusException extends IllegalArgumentException {
    method ConflictStatusException (line 25) | public ConflictStatusException(String message) {
    method ConflictStatusException (line 39) | public ConflictStatusException(Throwable cause) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/DescribeModelResponse.java
  class DescribeModelResponse (line 19) | public class DescribeModelResponse {
    method DescribeModelResponse (line 36) | public DescribeModelResponse() {
    method getModelName (line 40) | public String getModelName() {
    method setModelName (line 44) | public void setModelName(String modelName) {
    method getLoadedAtStartup (line 48) | public boolean getLoadedAtStartup() {
    method setLoadedAtStartup (line 52) | public void setLoadedAtStartup(boolean loadedAtStartup) {
    method getModelVersion (line 56) | public String getModelVersion() {
    method setModelVersion (line 60) | public void setModelVersion(String modelVersion) {
    method getModelUrl (line 64) | public String getModelUrl() {
    method setModelUrl (line 68) | public void setModelUrl(String modelUrl) {
    method getEngine (line 72) | public String getEngine() {
    method setEngine (line 76) | public void setEngine(String engine) {
    method getRuntime (line 80) | public String getRuntime() {
    method setRuntime (line 84) | public void setRuntime(String runtime) {
    method getMinWorkers (line 88) | public int getMinWorkers() {
    method setMinWorkers (line 92) | public void setMinWorkers(int minWorkers) {
    method getMaxWorkers (line 96) | public int getMaxWorkers() {
    method setMaxWorkers (line 100) | public void setMaxWorkers(int maxWorkers) {
    method getBatchSize (line 104) | public int getBatchSize() {
    method setBatchSize (line 108) | public void setBatchSize(int batchSize) {
    method getMaxBatchDelay (line 112) | public int getMaxBatchDelay() {
    method setMaxBatchDelay (line 116) | public void setMaxBatchDelay(int maxBatchDelay) {
    method getStatus (line 120) | public String getStatus() {
    method setStatus (line 124) | public void setStatus(String status) {
    method getWorkers (line 128) | public List<Worker> getWorkers() {
    method setWorkers (line 132) | public void setWorkers(List<Worker> workers) {
    method addWorker (line 136) | public void addWorker(
    method getMetrics (line 147) | public Metrics getMetrics() {
    method setMetrics (line 151) | public void setMetrics(Metrics metrics) {
    class Worker (line 155) | public static final class Worker {
      method Worker (line 163) | public Worker() {}
      method getId (line 165) | public String getId() {
      method setId (line 169) | public void setId(String id) {
      method getStartTime (line 173) | public Date getStartTime() {
      method setStartTime (line 177) | public void setStartTime(Date startTime) {
      method getStatus (line 181) | public String getStatus() {
      method setStatus (line 185) | public void setStatus(String status) {
      method isGpu (line 189) | public boolean isGpu() {
      method setGpu (line 193) | public void setGpu(boolean gpu) {
      method getMemoryUsage (line 197) | public long getMemoryUsage() {
      method setMemoryUsage (line 201) | public void setMemoryUsage(long memoryUsage) {
    class Metrics (line 206) | public static final class Metrics {
      method getRejectedRequests (line 212) | public int getRejectedRequests() {
      method setRejectedRequests (line 216) | public void setRejectedRequests(int rejectedRequests) {
      method getWaitingQueueSize (line 220) | public int getWaitingQueueSize() {
      method setWaitingQueueSize (line 224) | public void setWaitingQueueSize(int waitingQueueSize) {
      method getRequests (line 228) | public int getRequests() {
      method setRequests (line 232) | public void setRequests(int requests) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/ErrorResponse.java
  class ErrorResponse (line 15) | public class ErrorResponse {
    method ErrorResponse (line 21) | public ErrorResponse() {}
    method ErrorResponse (line 23) | public ErrorResponse(int code, String message) {
    method ErrorResponse (line 28) | public ErrorResponse(int code, String type, String message) {
    method getCode (line 34) | public int getCode() {
    method getType (line 38) | public String getType() {
    method getMessage (line 42) | public String getMessage() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/HttpRequestHandler.java
  class HttpRequestHandler (line 31) | public class HttpRequestHandler extends SimpleChannelInboundHandler<Full...
    method HttpRequestHandler (line 36) | public HttpRequestHandler() {}
    method HttpRequestHandler (line 38) | public HttpRequestHandler(HttpRequestHandlerChain chain) {
    method channelRead0 (line 43) | @Override
    method exceptionCaught (line 83) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/HttpRequestHandlerChain.java
  class HttpRequestHandlerChain (line 26) | public abstract class HttpRequestHandlerChain {
    method HttpRequestHandlerChain (line 31) | public HttpRequestHandlerChain() {}
    method HttpRequestHandlerChain (line 33) | public HttpRequestHandlerChain(Map<String, ModelServerEndpoint> map) {
    method setNextHandler (line 37) | public HttpRequestHandlerChain setNextHandler(HttpRequestHandlerChain ...
    method handleRequest (line 42) | protected abstract void handleRequest(
    method run (line 49) | private void run(
    method handleCustomEndpoint (line 86) | protected void handleCustomEndpoint(
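The handler classes in this package form a chain of responsibility: `HttpRequestHandler` delegates to a linked list of `HttpRequestHandlerChain` links (`InferenceRequestHandler`, `ManagementRequestHandler`, with `InvalidRequestHandler` as the terminal link), wired together via `setNextHandler`. A minimal sketch of that wiring, with simplified names and no Netty types (the URL prefixes below are illustrative, not the exact routing logic):

```java
// Simplified chain-of-responsibility sketch of the HTTP front end:
// each link either claims a URI or defers to the next handler.
abstract class HandlerChain {
    protected HandlerChain next;

    // Mirrors HttpRequestHandlerChain.setNextHandler: returns the new
    // link so chains can be built fluently.
    HandlerChain setNextHandler(HandlerChain next) {
        this.next = next;
        return next;
    }

    abstract String handle(String uri);
}

class InferenceHandler extends HandlerChain {
    @Override
    String handle(String uri) {
        if (uri.startsWith("/predictions")) {
            return "inference";
        }
        return next.handle(uri); // not ours; defer down the chain
    }
}

class ManagementHandler extends HandlerChain {
    @Override
    String handle(String uri) {
        if (uri.startsWith("/models")) {
            return "management";
        }
        return next.handle(uri);
    }
}

class InvalidHandler extends HandlerChain {
    @Override
    String handle(String uri) {
        return "404"; // terminal link: nothing matched
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        HandlerChain root = new InferenceHandler();
        root.setNextHandler(new ManagementHandler())
            .setNextHandler(new InvalidHandler());
        System.out.println(root.handle("/predictions/resnet")); // inference
        System.out.println(root.handle("/models"));             // management
        System.out.println(root.handle("/nope"));               // 404
    }
}
```

This matches the fluent `setNextHandler` signature shown above, which returns the appended link rather than `this`.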

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/InferenceRequestHandler.java
  class InferenceRequestHandler (line 45) | public class InferenceRequestHandler extends HttpRequestHandlerChain {
    method InferenceRequestHandler (line 50) | public InferenceRequestHandler(Map<String, ModelServerEndpoint> ep) {
    method handleRequest (line 54) | @Override
    method isInferenceReq (line 87) | private boolean isInferenceReq(String[] segments) {
    method validatePredictionsEndpoint (line 98) | private void validatePredictionsEndpoint(String[] segments) {
    method handlePredictions (line 110) | private void handlePredictions(
    method handleInvocations (line 119) | private void handleInvocations(
    method handleLegacyPredict (line 137) | private void handleLegacyPredict(
    method predict (line 150) | private void predict(
    method parseRequest (line 182) | private static RequestInput parseRequest(

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/InternalServerException.java
  class InternalServerException (line 15) | public class InternalServerException extends RuntimeException {
    method InternalServerException (line 25) | public InternalServerException(String message) {
    method InternalServerException (line 41) | public InternalServerException(String message, Throwable cause) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/InvalidPluginException.java
  class InvalidPluginException (line 16) | public class InvalidPluginException extends RuntimeException {
    method InvalidPluginException (line 22) | public InvalidPluginException() {
    method InvalidPluginException (line 31) | public InvalidPluginException(String msg) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/InvalidRequestHandler.java
  class InvalidRequestHandler (line 8) | public class InvalidRequestHandler extends HttpRequestHandlerChain {
    method InvalidRequestHandler (line 9) | public InvalidRequestHandler() {}
    method handleRequest (line 11) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/ListModelsResponse.java
  class ListModelsResponse (line 18) | public class ListModelsResponse {
    method ListModelsResponse (line 23) | public ListModelsResponse() {
    method getNextPageToken (line 27) | public String getNextPageToken() {
    method setNextPageToken (line 31) | public void setNextPageToken(String nextPageToken) {
    method getModels (line 35) | public List<ModelItem> getModels() {
    method setModels (line 39) | public void setModels(List<ModelItem> models) {
    method addModel (line 43) | public void addModel(String modelName, String modelUrl) {
    class ModelItem (line 47) | public static final class ModelItem {
      method ModelItem (line 52) | public ModelItem() {}
      method ModelItem (line 54) | public ModelItem(String modelName, String modelUrl) {
      method getModelName (line 59) | public String getModelName() {
      method setModelName (line 63) | public void setModelName(String modelName) {
      method getModelUrl (line 67) | public String getModelUrl() {
      method setModelUrl (line 71) | public void setModelUrl(String modelUrl) {
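`ListModelsResponse` pairs a list of `ModelItem`s with a `nextPageToken`, implying token-based pagination in `ManagementRequestHandler.handleListModels`. A sketch of that contract, assuming (hypothetically; the real token encoding is not shown in this index) that the token is simply the next start index:

```java
import java.util.List;

// Hypothetical pagination helper matching the ListModelsResponse shape:
// return one page of model names plus the token for the following page,
// or a null token when the listing is exhausted.
public class ModelPager {
    static String nextToken(List<String> all, int start, int limit) {
        int end = Math.min(start + limit, all.size());
        return end < all.size() ? Integer.toString(end) : null;
    }

    static List<String> page(List<String> all, int start, int limit) {
        return all.subList(start, Math.min(start + limit, all.size()));
    }

    public static void main(String[] args) {
        List<String> models = List.of("resnet", "lstm", "noop");
        System.out.println(page(models, 0, 2));      // [resnet, lstm]
        System.out.println(nextToken(models, 0, 2)); // 2
    }
}
```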

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/ManagementRequestHandler.java
  class ManagementRequestHandler (line 50) | public class ManagementRequestHandler extends HttpRequestHandlerChain {
    method ManagementRequestHandler (line 53) | public ManagementRequestHandler(Map<String, ModelServerEndpoint> ep) {
    method handleRequest (line 57) | @Override
    method isManagementReq (line 99) | private boolean isManagementReq(String[] segments) {
    method handleListModels (line 105) | private void handleListModels(ChannelHandlerContext ctx, QueryStringDe...
    method handleDescribeModel (line 138) | private void handleDescribeModel(ChannelHandlerContext ctx, String mod...
    method handleRegisterModel (line 175) | private void handleRegisterModel(
    method handleUnregisterModel (line 247) | private void handleUnregisterModel(ChannelHandlerContext ctx, String m...
    method handleScaleModel (line 262) | private void handleScaleModel(
    method updateModelWorkers (line 280) | private void updateModelWorkers(
    method parseRequest (line 332) | private RegisterModelRequest parseRequest(FullHttpRequest req, QuerySt...

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/MethodNotAllowedException.java
  class MethodNotAllowedException (line 15) | public class MethodNotAllowedException extends RuntimeException {
    method MethodNotAllowedException (line 23) | public MethodNotAllowedException() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/RequestTimeoutException.java
  class RequestTimeoutException (line 15) | public class RequestTimeoutException extends RuntimeException {
    method RequestTimeoutException (line 25) | public RequestTimeoutException(String message) {
    method RequestTimeoutException (line 41) | public RequestTimeoutException(String message, Throwable cause) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/ResourceNotFoundException.java
  class ResourceNotFoundException (line 15) | public class ResourceNotFoundException extends RuntimeException {
    method ResourceNotFoundException (line 23) | public ResourceNotFoundException() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/ServiceUnavailableException.java
  class ServiceUnavailableException (line 3) | public class ServiceUnavailableException extends RuntimeException {
    method ServiceUnavailableException (line 13) | public ServiceUnavailableException(String message) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/Session.java
  class Session (line 18) | public class Session {
    method Session (line 28) | public Session(String remoteIp, HttpRequest request) {
    method getRequestId (line 42) | public String getRequestId() {
    method setCode (line 46) | public void setCode(int code) {
    method toString (line 50) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/StatusResponse.java
  class StatusResponse (line 15) | public class StatusResponse {
    method StatusResponse (line 19) | public StatusResponse() {}
    method StatusResponse (line 21) | public StatusResponse(String status) {
    method getStatus (line 25) | public String getStatus() {
    method setStatus (line 29) | public void setStatus(String status) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/http/messages/RegisterModelRequest.java
  class RegisterModelRequest (line 21) | public class RegisterModelRequest {
    method RegisterModelRequest (line 52) | public RegisterModelRequest(QueryStringDecoder decoder) {
    method RegisterModelRequest (line 77) | public RegisterModelRequest() {
    method getModelName (line 86) | public String getModelName() {
    method getRuntime (line 90) | public String getRuntime() {
    method getHandler (line 94) | public String getHandler() {
    method getBatchSize (line 98) | public Integer getBatchSize() {
    method getMaxBatchDelay (line 102) | public Integer getMaxBatchDelay() {
    method getInitialWorkers (line 106) | public Integer getInitialWorkers() {
    method isSynchronous (line 110) | public Boolean isSynchronous() {
    method getResponseTimeoutSeconds (line 114) | public Integer getResponseTimeoutSeconds() {
    method getModelUrl (line 118) | public String getModelUrl() {
    method getPreloadModel (line 122) | public String getPreloadModel() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/Dimension.java
  class Dimension (line 17) | public class Dimension {
    method Dimension (line 25) | public Dimension() {}
    method Dimension (line 27) | public Dimension(String name, String value) {
    method getName (line 32) | public String getName() {
    method setName (line 36) | public void setName(String name) {
    method getValue (line 40) | public String getValue() {
    method setValue (line 44) | public void setValue(String value) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/Metric.java
  class Metric (line 22) | public class Metric {
    method Metric (line 49) | public Metric() {}
    method Metric (line 51) | public Metric(
    method getHostName (line 64) | public String getHostName() {
    method setHostName (line 68) | public void setHostName(String hostName) {
    method getRequestId (line 72) | public String getRequestId() {
    method setRequestId (line 76) | public void setRequestId(String requestId) {
    method getMetricName (line 80) | public String getMetricName() {
    method setMetricName (line 84) | public void setMetricName(String metricName) {
    method getValue (line 88) | public String getValue() {
    method setValue (line 92) | public void setValue(String value) {
    method getUnit (line 96) | public String getUnit() {
    method setUnit (line 100) | public void setUnit(String unit) {
    method getDimensions (line 104) | public List<Dimension> getDimensions() {
    method setDimensions (line 108) | public void setDimensions(List<Dimension> dimensions) {
    method getTimestamp (line 112) | public String getTimestamp() {
    method setTimestamp (line 116) | public void setTimestamp(String timestamp) {
    method parse (line 120) | public static Metric parse(String line) {
    method toString (line 151) | @Override
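`Metric.parse(String line)` turns a backend-emitted log line into a `Metric` object. The actual wire format is not reproduced in this index; purely as an illustration of the regex-capture approach such a parser takes, a hypothetical `Name.Unit:value|#dimensions` line could be split like this:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical log-line parser in the spirit of Metric.parse.  The real
// MMS metric format differs; this only shows the capture-group technique.
public class MetricLineParser {
    // e.g. "PredictionTime.Milliseconds:12.5|#ModelName:resnet"
    private static final Pattern PATTERN =
            Pattern.compile("^(\\w+)\\.(\\w+):([0-9.]+)(?:\\|#(.*))?$");

    static String[] parse(String line) {
        Matcher m = PATTERN.matcher(line);
        if (!m.matches()) {
            return null; // malformed lines are skipped, not fatal
        }
        // name, unit, value, raw dimension string (may be null)
        return new String[] {m.group(1), m.group(2), m.group(3), m.group(4)};
    }

    public static void main(String[] args) {
        String[] parts = parse("PredictionTime.Milliseconds:12.5|#ModelName:resnet");
        System.out.println(parts[0] + " = " + parts[2]);
    }
}
```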

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/MetricCollector.java
  class MetricCollector (line 31) | public class MetricCollector implements Runnable {
    method MetricCollector (line 38) | public MetricCollector(ConfigManager configManager) {
    method run (line 42) | @Override
    method writeWorkerPids (line 135) | private void writeWorkerPids(Map<Integer, WorkerThread> workerMap, Out...

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/MetricManager.java
  class MetricManager (line 21) | public final class MetricManager {
    method MetricManager (line 26) | private MetricManager() {
    method getInstance (line 30) | public static MetricManager getInstance() {
    method scheduleMetrics (line 34) | public static void scheduleMetrics(ConfigManager configManager) {
    method getMetrics (line 45) | public synchronized List<Metric> getMetrics() {
    method setMetrics (line 49) | public synchronized void setMetrics(List<Metric> metrics) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Encoding.java
  class Encoding (line 15) | public class Encoding {
    method Encoding (line 22) | public Encoding() {}
    method Encoding (line 24) | public Encoding(String contentType) {
    method getContentType (line 28) | public String getContentType() {
    method setContentType (line 32) | public void setContentType(String contentType) {
    method isAllowReserved (line 36) | public boolean isAllowReserved() {
    method setAllowReserved (line 40) | public void setAllowReserved(boolean allowReserved) {
    method getStyle (line 44) | public String getStyle() {
    method setStyle (line 48) | public void setStyle(String style) {
    method isExplode (line 52) | public boolean isExplode() {
    method setExplode (line 56) | public void setExplode(boolean explode) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Info.java
  class Info (line 15) | public class Info {
    method Info (line 22) | public Info() {}
    method getTitle (line 24) | public String getTitle() {
    method setTitle (line 28) | public void setTitle(String title) {
    method getDescription (line 32) | public String getDescription() {
    method setDescription (line 36) | public void setDescription(String description) {
    method getTermsOfService (line 40) | public String getTermsOfService() {
    method setTermsOfService (line 44) | public void setTermsOfService(String termsOfService) {
    method getVersion (line 48) | public String getVersion() {
    method setVersion (line 52) | public void setVersion(String version) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/MediaType.java
  class MediaType (line 18) | public class MediaType {
    method MediaType (line 24) | public MediaType() {}
    method MediaType (line 26) | public MediaType(String contentType, Schema schema) {
    method getContentType (line 31) | public String getContentType() {
    method setContentType (line 35) | public void setContentType(String contentType) {
    method getSchema (line 39) | public Schema getSchema() {
    method setSchema (line 43) | public void setSchema(Schema schema) {
    method getEncoding (line 47) | public Map<String, Encoding> getEncoding() {
    method setEncoding (line 51) | public void setEncoding(Map<String, Encoding> encoding) {
    method addEncoding (line 55) | public void addEncoding(String contentType, Encoding encoding) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/OpenApi.java
  class OpenApi (line 18) | public class OpenApi {
    method OpenApi (line 24) | public OpenApi() {}
    method getOpenapi (line 26) | public String getOpenapi() {
    method setOpenapi (line 30) | public void setOpenapi(String openapi) {
    method getInfo (line 34) | public Info getInfo() {
    method setInfo (line 38) | public void setInfo(Info info) {
    method getPaths (line 42) | public Map<String, Path> getPaths() {
    method setPaths (line 46) | public void setPaths(Map<String, Path> paths) {
    method addPath (line 50) | public void addPath(String url, Path path) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/OpenApiUtils.java
  class OpenApiUtils (line 23) | public final class OpenApiUtils {
    method OpenApiUtils (line 25) | private OpenApiUtils() {}
    method listApis (line 27) | public static String listApis(ConnectorType type) {
    method listInferenceApis (line 45) | static void listInferenceApis(OpenApi openApi) {
    method listManagementApis (line 55) | static void listManagementApis(OpenApi openApi) {
    method getModelApi (line 61) | public static String getModelApi(Model model) {
    method getApiDescriptionPath (line 74) | private static Path getApiDescriptionPath(boolean legacy) {
    method getPingPath (line 95) | private static Path getPingPath() {
    method getInvocationsPath (line 110) | private static Path getInvocationsPath() {
    method getPredictionsPath (line 144) | private static Path getPredictionsPath() {
    method getLegacyPredictPath (line 193) | private static Path getLegacyPredictPath() {
    method getModelsPath (line 225) | private static Path getModelsPath() {
    method getModelManagerPath (line 232) | private static Path getModelManagerPath() {
    method getListModelsOperation (line 240) | private static Operation getListModelsOperation() {
    method getRegisterOperation (line 287) | private static Operation getRegisterOperation() {
    method getUnRegisterOperation (line 368) | private static Operation getUnRegisterOperation() {
    method getDescribeModelOperation (line 403) | private static Operation getDescribeModelOperation() {
    method getScaleOperation (line 465) | private static Operation getScaleOperation() {
    method getModelPath (line 509) | private static Path getModelPath(String modelName) {
    method getErrorResponse (line 520) | private static MediaType getErrorResponse() {
    method getStatusResponse (line 529) | private static MediaType getStatusResponse() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Operation.java
  class Operation (line 20) | public class Operation {
    method Operation (line 30) | public Operation() {}
    method Operation (line 32) | public Operation(String operationId) {
    method Operation (line 36) | public Operation(String operationId, String description) {
    method getSummary (line 41) | public String getSummary() {
    method setSummary (line 45) | public void setSummary(String summary) {
    method getDescription (line 49) | public String getDescription() {
    method setDescription (line 53) | public void setDescription(String description) {
    method getOperationId (line 57) | public String getOperationId() {
    method setOperationId (line 61) | public void setOperationId(String operationId) {
    method getParameters (line 65) | public List<Parameter> getParameters() {
    method setParameters (line 69) | public void setParameters(List<Parameter> parameters) {
    method addParameter (line 73) | public void addParameter(Parameter parameter) {
    method getRequestBody (line 80) | public RequestBody getRequestBody() {
    method setRequestBody (line 84) | public void setRequestBody(RequestBody requestBody) {
    method getResponses (line 88) | public Map<String, Response> getResponses() {
    method setResponses (line 92) | public void setResponses(Map<String, Response> responses) {
    method addResponse (line 96) | public void addResponse(Response response) {
    method getDeprecated (line 103) | public Boolean getDeprecated() {
    method setDeprecated (line 107) | public void setDeprecated(Boolean deprecated) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Parameter.java
  class Parameter (line 15) | @SuppressWarnings("PMD.AbstractClassWithoutAbstractMethod")
    method setType (line 29) | public void setType(String type) {
    method getType (line 33) | public String getType() {
    method getName (line 37) | public String getName() {
    method setName (line 41) | public void setName(String name) {
    method getIn (line 45) | public String getIn() {
    method setIn (line 49) | public void setIn(String in) {
    method getDescription (line 53) | public String getDescription() {
    method setDescription (line 57) | public void setDescription(String description) {
    method isRequired (line 61) | public boolean isRequired() {
    method setRequired (line 65) | public void setRequired(boolean required) {
    method getDeprecated (line 69) | public Boolean getDeprecated() {
    method setDeprecated (line 73) | public void setDeprecated(Boolean deprecated) {
    method getAllowEmptyValue (line 77) | public Boolean getAllowEmptyValue() {
    method setAllowEmptyValue (line 81) | public void setAllowEmptyValue(Boolean allowEmptyValue) {
    method getStyle (line 85) | public String getStyle() {
    method setStyle (line 89) | public void setStyle(String style) {
    method getExplode (line 93) | public Boolean getExplode() {
    method setExplode (line 97) | public void setExplode(Boolean explode) {
    method getSchema (line 101) | public Schema getSchema() {
    method setSchema (line 105) | public void setSchema(Schema schema) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Path.java
  class Path (line 17) | public class Path {
    method getGet (line 28) | public Operation getGet() {
    method setGet (line 32) | public void setGet(Operation get) {
    method getPut (line 36) | public Operation getPut() {
    method setPut (line 40) | public void setPut(Operation put) {
    method getPost (line 44) | public Operation getPost() {
    method setPost (line 48) | public void setPost(Operation post) {
    method getHead (line 52) | public Operation getHead() {
    method setHead (line 56) | public void setHead(Operation head) {
    method getDelete (line 60) | public Operation getDelete() {
    method setDelete (line 64) | public void setDelete(Operation delete) {
    method getPatch (line 68) | public Operation getPatch() {
    method setPatch (line 72) | public void setPatch(Operation patch) {
    method getOptions (line 76) | public Operation getOptions() {
    method setOptions (line 80) | public void setOptions(Operation options) {
    method getParameters (line 84) | public List<Parameter> getParameters() {
    method setParameters (line 88) | public void setParameters(List<Parameter> parameters) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/PathParameter.java
  class PathParameter (line 15) | public class PathParameter extends Parameter {
    method PathParameter (line 17) | public PathParameter() {
    method PathParameter (line 21) | public PathParameter(String name, String description) {
    method PathParameter (line 25) | public PathParameter(String name, String type, String defaultValue, St...

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/QueryParameter.java
  class QueryParameter (line 15) | public class QueryParameter extends Parameter {
    method QueryParameter (line 17) | public QueryParameter() {
    method QueryParameter (line 21) | public QueryParameter(String name, String description) {
    method QueryParameter (line 25) | public QueryParameter(String name, String type, String description) {
    method QueryParameter (line 29) | public QueryParameter(String name, String type, String defaultValue, S...
    method QueryParameter (line 33) | public QueryParameter(

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/RequestBody.java
  class RequestBody (line 18) | public class RequestBody {
    method RequestBody (line 24) | public RequestBody() {}
    method getDescription (line 26) | public String getDescription() {
    method setDescription (line 30) | public void setDescription(String description) {
    method getContent (line 34) | public Map<String, MediaType> getContent() {
    method setContent (line 38) | public void setContent(Map<String, MediaType> content) {
    method addContent (line 42) | public void addContent(MediaType mediaType) {
    method isRequired (line 49) | public boolean isRequired() {
    method setRequired (line 53) | public void setRequired(boolean required) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Response.java
  class Response (line 18) | public class Response {
    method Response (line 24) | public Response() {}
    method Response (line 26) | public Response(String code, String description) {
    method Response (line 31) | public Response(String code, String description, MediaType mediaType) {
    method getCode (line 38) | public String getCode() {
    method getDescription (line 42) | public String getDescription() {
    method setDescription (line 46) | public void setDescription(String description) {
    method getContent (line 50) | public Map<String, MediaType> getContent() {
    method setContent (line 54) | public void setContent(Map<String, MediaType> content) {
    method addContent (line 58) | public void addContent(MediaType mediaType) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Schema.java
  class Schema (line 21) | public class Schema {
    method Schema (line 40) | public Schema() {}
    method Schema (line 42) | public Schema(String type) {
    method Schema (line 46) | public Schema(String type, String description) {
    method Schema (line 50) | public Schema(String type, String description, String defaultValue) {
    method getType (line 56) | public String getType() {
    method setType (line 60) | public void setType(String type) {
    method getFormat (line 64) | public String getFormat() {
    method setFormat (line 68) | public void setFormat(String format) {
    method getName (line 72) | public String getName() {
    method setName (line 76) | public void setName(String name) {
    method getRequired (line 80) | public List<String> getRequired() {
    method setRequired (line 84) | public void setRequired(List<String> required) {
    method getProperties (line 88) | public Map<String, Schema> getProperties() {
    method setProperties (line 92) | public void setProperties(Map<String, Schema> properties) {
    method addProperty (line 96) | public void addProperty(String key, Schema schema, boolean requiredPro...
    method getItems (line 109) | public Schema getItems() {
    method setItems (line 113) | public void setItems(Schema items) {
    method getDescription (line 117) | public String getDescription() {
    method setDescription (line 121) | public void setDescription(String description) {
    method getExample (line 125) | public Object getExample() {
    method setExample (line 129) | public void setExample(Object example) {
    method getAdditionalProperties (line 133) | public Schema getAdditionalProperties() {
    method setAdditionalProperties (line 137) | public void setAdditionalProperties(Schema additionalProperties) {
    method getDiscriminator (line 141) | public String getDiscriminator() {
    method setDiscriminator (line 145) | public void setDiscriminator(String discriminator) {
    method getEnumeration (line 149) | public List<String> getEnumeration() {
    method setEnumeration (line 153) | public void setEnumeration(List<String> enumeration) {
    method getDefaultValue (line 157) | public String getDefaultValue() {
    method setDefaultValue (line 161) | public void setDefaultValue(String defaultValue) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/servingsdk/impl/ModelServerContext.java
  class ModelServerContext (line 24) | public class ModelServerContext implements Context {
    method getConfig (line 25) | @Override
    method getModels (line 30) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/servingsdk/impl/ModelServerModel.java
  class ModelServerModel (line 22) | public class ModelServerModel implements Model {
    method ModelServerModel (line 25) | public ModelServerModel(com.amazonaws.ml.mms.wlm.Model m) {
    method getModelName (line 29) | @Override
    method getModelUrl (line 34) | @Override
    method getModelHandler (line 39) | @Override
    method getModelWorkers (line 44) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/servingsdk/impl/ModelServerRequest.java
  class ModelServerRequest (line 25) | public class ModelServerRequest implements Request {
    method ModelServerRequest (line 29) | public ModelServerRequest(FullHttpRequest r, QueryStringDecoder d) {
    method getHeaderNames (line 34) | @Override
    method getRequestURI (line 39) | @Override
    method getParameterMap (line 44) | @Override
    method getParameter (line 49) | @Override
    method getContentType (line 54) | @Override
    method getInputStream (line 59) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/servingsdk/impl/ModelServerResponse.java
  class ModelServerResponse (line 23) | public class ModelServerResponse implements Response {
    method ModelServerResponse (line 27) | public ModelServerResponse(FullHttpResponse rsp) {
    method setStatus (line 31) | @Override
    method setStatus (line 36) | @Override
    method setHeader (line 41) | @Override
    method addHeader (line 46) | @Override
    method setContentType (line 51) | @Override
    method getOutputStream (line 56) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/servingsdk/impl/ModelWorker.java
  class ModelWorker (line 20) | public class ModelWorker implements Worker {
    method ModelWorker (line 24) | public ModelWorker(WorkerThread t) {
    method isRunning (line 29) | @Override
    method getWorkerMemory (line 34) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/servingsdk/impl/PluginsManager.java
  class PluginsManager (line 27) | public final class PluginsManager {
    method PluginsManager (line 35) | private PluginsManager() {}
    method getInstance (line 37) | public static PluginsManager getInstance() {
    method initialize (line 41) | public void initialize() {
    method validateEndpointPlugin (line 46) | private boolean validateEndpointPlugin(Annotation a, EndpointTypes typ...
    method getEndpoints (line 52) | private HashMap<String, ModelServerEndpoint> getEndpoints(EndpointType...
    method initInferenceEndpoints (line 76) | private HashMap<String, ModelServerEndpoint> initInferenceEndpoints() {
    method initManagementEndpoints (line 80) | private HashMap<String, ModelServerEndpoint> initManagementEndpoints() {
    method getInferenceEndpoints (line 84) | public Map<String, ModelServerEndpoint> getInferenceEndpoints() {
    method getManagementEndpoints (line 88) | public Map<String, ModelServerEndpoint> getManagementEndpoints() {
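`PluginsManager` discovers `ModelServerEndpoint` plugins at startup and indexes them separately as inference or management endpoints. A sketch of the classpath-scanning mechanism such discovery typically uses (`java.util.ServiceLoader`; the `Endpoint` interface below is a simplified stand-in, not the real serving-sdk type, and the annotation-based validation of `validateEndpointPlugin` is omitted):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

// Simplified stand-in for a serving-sdk endpoint plugin.
interface Endpoint {
    String urlPattern();

    String handle(String body);
}

public class PluginRegistry {
    // Scan the classpath for Endpoint implementations declared in
    // META-INF/services files and index them by URL pattern, in the
    // spirit of PluginsManager.getEndpoints.
    static Map<String, Endpoint> discover() {
        Map<String, Endpoint> endpoints = new HashMap<>();
        for (Endpoint e : ServiceLoader.load(Endpoint.class)) {
            endpoints.put(e.urlPattern(), e);
        }
        return endpoints;
    }

    public static void main(String[] args) {
        // With no provider JARs on the classpath this prints an empty map.
        System.out.println(discover());
    }
}
```

Plugin JARs would register implementations via a `META-INF/services` entry, so the server finds them without any compile-time dependency.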

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/ConfigManager.java
  class ConfigManager (line 53) | public final class ConfigManager {
    method ConfigManager (line 111) | private ConfigManager(Arguments args) {
    method resolveEnvVarVals (line 172) | private void resolveEnvVarVals(Properties prop) {
    method setSystemVars (line 193) | private void setSystemVars() {
    method getEnableEnvVarsConfig (line 210) | String getEnableEnvVarsConfig() {
    method getHostName (line 214) | public String getHostName() {
    method init (line 218) | public static void init(Arguments args) {
    method getInstance (line 222) | public static ConfigManager getInstance() {
    method isDebug (line 226) | public boolean isDebug() {
    method getListener (line 231) | public Connector getListener(boolean management) {
    method getPreloadModel (line 241) | public String getPreloadModel() {
    method getPreferDirectBuffer (line 245) | public boolean getPreferDirectBuffer() {
    method getNettyThreads (line 249) | public int getNettyThreads() {
    method getNettyClientThreads (line 253) | public int getNettyClientThreads() {
    method getJobQueueSize (line 257) | public int getJobQueueSize() {
    method getNumberOfGpu (line 261) | public int getNumberOfGpu() {
    method getMmsDefaultServiceHandler (line 265) | public String getMmsDefaultServiceHandler() {
    method getConfiguration (line 269) | public Properties getConfiguration() {
    method getConfiguredDefaultWorkersPerModel (line 273) | public int getConfiguredDefaultWorkersPerModel() {
    method getDefaultWorkers (line 277) | public int getDefaultWorkers() {
    method getMetricTimeInterval (line 297) | public int getMetricTimeInterval() {
    method getModelServerHome (line 301) | public String getModelServerHome() {
    method getPythonExecutable (line 322) | public String getPythonExecutable() {
    method getModelStore (line 326) | public String getModelStore() {
    method getLoadModels (line 330) | public String getLoadModels() {
    method getBlacklistPattern (line 334) | public Pattern getBlacklistPattern() {
    method getCorsAllowedOrigin (line 338) | public String getCorsAllowedOrigin() {
    method getCorsAllowedMethods (line 342) | public String getCorsAllowedMethods() {
    method getCorsAllowedHeaders (line 346) | public String getCorsAllowedHeaders() {
    method getSslContext (line 350) | public SslContext getSslContext() throws IOException, GeneralSecurityE...
    method loadPrivateKey (line 405) | private PrivateKey loadPrivateKey(String keyFile) throws IOException, ...
    method loadCertificateChain (line 423) | private X509Certificate[] loadCertificateChain(String keyFile)
    method getProperty (line 437) | public String getProperty(String key, String def) {
    method validateConfigurations (line 441) | public void validateConfigurations() throws InvalidPropertiesFormatExc...
    method dumpConfigurations (line 450) | public String dumpConfigurations() {
    method useNativeIo (line 498) | public boolean useNativeIo() {
    method getIoRatio (line 502) | public int getIoRatio() {
    method getMaxResponseSize (line 506) | public int getMaxResponseSize() {
    method getMaxRequestSize (line 510) | public int getMaxRequestSize() {
    method setProperty (line 514) | void setProperty(String key, String value) {
    method getIntProperty (line 518) | private int getIntProperty(String key, int def) {
    method getDefaultResponseTimeoutSeconds (line 526) | public int getDefaultResponseTimeoutSeconds() {
    method getUnregisterModelTimeout (line 542) | public int getUnregisterModelTimeout() {
    method findMmsHome (line 546) | private File findMmsHome() {
    method enableAsyncLogging (line 559) | private void enableAsyncLogging() {
    method getBackendConfiguration (line 565) | public HashMap<String, String> getBackendConfiguration() {
    method getCanonicalPath (line 573) | private static String getCanonicalPath(File file) {
    method getCanonicalPath (line 581) | private static String getCanonicalPath(String path) {
    method getAvailableGpu (line 588) | private static int getAvailableGpu() {
    class Arguments (line 606) | public static final class Arguments {
      method Arguments (line 613) | public Arguments() {}
      method Arguments (line 615) | public Arguments(CommandLine cmd) {
      method getOptions (line 622) | public static Options getOptions() {
      method getMmsConfigFile (line 655) | public String getMmsConfigFile() {
      method getPythonExecutable (line 659) | public String getPythonExecutable() {
      method setMmsConfigFile (line 663) | public void setMmsConfigFile(String mmsConfigFile) {
      method getModelStore (line 667) | public String getModelStore() {
      method setModelStore (line 671) | public void setModelStore(String modelStore) {
      method getModels (line 675) | public String[] getModels() {
      method setModels (line 679) | public void setModels(String[] models) {

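The `init`/`getInstance` pair in `ConfigManager`'s method list points at a static-singleton configuration pattern: the server initializes one instance from command-line arguments at startup, and every other component reads it through `getInstance()`. A minimal sketch of that pattern, with illustrative class and property names (not the actual MMS implementation):

```java
// Hypothetical sketch of the init()/getInstance() singleton pattern that
// ConfigManager's method list suggests; names and defaults are illustrative.
public class ConfigSketch {
    private static ConfigSketch instance;       // set once by init()
    private final java.util.Properties prop = new java.util.Properties();

    private ConfigSketch() {
        prop.setProperty("netty_threads", "0"); // assumed default
    }

    public static void init() {                 // called once at startup
        instance = new ConfigSketch();
    }

    public static ConfigSketch getInstance() {  // read everywhere else
        return instance;
    }

    // Mirrors the getIntProperty(key, def) helper shape in the listing.
    public int getIntProperty(String key, int def) {
        String v = prop.getProperty(key);
        return v == null ? def : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        init();
        System.out.println(getInstance().getIntProperty("netty_threads", 4));
        System.out.println(getInstance().getIntProperty("job_queue_size", 100));
    }
}
```

The private constructor plus static accessor keeps exactly one configuration object alive for the process, which is why the listing shows `ConfigManager(Arguments args)` as private.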
FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/Connector.java
  class Connector (line 42) | public class Connector {
    method Connector (line 58) | public Connector(int port) {
    method Connector (line 62) | private Connector(int port, boolean uds) {
    method Connector (line 74) | private Connector(
    method parse (line 89) | public static Connector parse(String binding, boolean management) {
    method getSocketType (line 126) | public String getSocketType() {
    method getSocketPath (line 130) | public String getSocketPath() {
    method isUds (line 134) | public boolean isUds() {
    method isSsl (line 138) | public boolean isSsl() {
    method isManagement (line 142) | public boolean isManagement() {
    method getSocketAddress (line 146) | public SocketAddress getSocketAddress() {
    method getPurpose (line 150) | public String getPurpose() {
    method newEventLoopGroup (line 154) | public static EventLoopGroup newEventLoopGroup(int threads) {
    method getServerChannel (line 166) | public Class<? extends ServerChannel> getServerChannel() {
    method getClientChannel (line 176) | public Class<? extends Channel> getClientChannel() {
    method clean (line 186) | public void clean() {
    method equals (line 192) | @Override
    method hashCode (line 207) | @Override
    method toString (line 212) | @Override

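`Connector.parse(String binding, boolean management)` together with `isUds`/`isSsl` suggests that a single binding string selects both the transport (TCP vs Unix domain socket) and TLS. The exact grammar MMS accepts is not shown in the index, so the sketch below is an assumption for illustration: a `unix:` prefix for UDS paths and a URI scheme check for SSL.

```java
// Illustrative parser for binding strings of the kind Connector.parse
// might accept; the "unix:" prefix and https-means-ssl rule are assumptions.
import java.net.URI;

public class BindingSketch {
    static String describe(String binding) {
        if (binding.startsWith("unix:")) {           // assumed UDS form
            return "uds " + binding.substring(5);
        }
        URI uri = URI.create(binding);               // e.g. https://127.0.0.1:8443
        boolean ssl = "https".equals(uri.getScheme());
        return (ssl ? "ssl " : "tcp ") + uri.getHost() + ":" + uri.getPort();
    }

    public static void main(String[] args) {
        System.out.println(describe("unix:/tmp/mms.sock"));
        System.out.println(describe("https://127.0.0.1:8443"));
        System.out.println(describe("http://0.0.0.0:8080"));
    }
}
```

Folding transport and TLS selection into one string keeps the config surface small: the same `inference_address`-style property can switch between plain TCP, TLS, and UDS without extra flags.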
FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/ConnectorType.java
  type ConnectorType (line 3) | public enum ConnectorType {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/JsonUtils.java
  class JsonUtils (line 18) | public final class JsonUtils {
    method JsonUtils (line 27) | private JsonUtils() {}

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/NettyUtils.java
  class NettyUtils (line 49) | public final class NettyUtils {
    method NettyUtils (line 81) | private NettyUtils() {}
    method requestReceived (line 83) | public static void requestReceived(Channel channel, HttpRequest reques...
    method getRequestId (line 98) | public static String getRequestId(Channel channel) {
    method sendJsonResponse (line 106) | public static void sendJsonResponse(ChannelHandlerContext ctx, Object ...
    method sendJsonResponse (line 110) | public static void sendJsonResponse(
    method sendJsonResponse (line 115) | public static void sendJsonResponse(ChannelHandlerContext ctx, String ...
    method sendJsonResponse (line 119) | public static void sendJsonResponse(
    method sendError (line 135) | public static void sendError(
    method sendError (line 142) | public static void sendError(
    method sendHttpResponse (line 155) | public static void sendHttpResponse(
    method closeOnFlush (line 215) | public static void closeOnFlush(Channel ch) {
    method getBytes (line 221) | public static byte[] getBytes(ByteBuf buf) {
    method getParameter (line 232) | public static String getParameter(QueryStringDecoder decoder, String k...
    method getIntParameter (line 240) | public static int getIntParameter(QueryStringDecoder decoder, String k...
    method getFormData (line 252) | public static InputParameter getFormData(InterfaceHttpData data) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/OpenSslKey.java
  class OpenSslKey (line 16) | public final class OpenSslKey {
    method OpenSslKey (line 21) | private OpenSslKey() {}
    method convertPrivateKey (line 29) | public static byte[] convertPrivateKey(byte[] keySpec) {
    method encodeOID (line 56) | private static byte[] encodeOID(int[] oid) {
    method encodeOctetString (line 83) | private static byte[] encodeOctetString(byte[] bytes) {
    method encodeSequence (line 104) | private static byte[] encodeSequence(byte[][] byteArrays) {
    method writeLengthField (line 140) | private static int writeLengthField(byte[] bytes, int len) {
    method getLengthOfLengthField (line 156) | private static int getLengthOfLengthField(int len) {
    method getOIDCompLength (line 170) | private static int getOIDCompLength(int comp) {
    method writeOIDComp (line 184) | private static int writeOIDComp(int comp, byte[] bytes, int offset) {

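`OpenSslKey`'s helpers (`writeLengthField`, `getLengthOfLengthField`, `encodeSequence`, `encodeOctetString`) are the building blocks of ASN.1 DER encoding, used here to wrap a raw key into a PKCS#8-style structure. The length-field sketch below follows standard DER (short form for lengths under 0x80, long form `0x80|n` plus `n` big-endian bytes otherwise); that it matches the source byte-for-byte is an assumption.

```java
// Minimal sketch of DER length-field encoding, the idea behind
// OpenSslKey.writeLengthField/getLengthOfLengthField. Standard ASN.1 DER.
public class DerLengthSketch {
    // Number of bytes the length field occupies.
    static int lengthOfLengthField(int len) {
        if (len < 0x80) return 1;            // short form: single byte
        int n = 1;
        while ((len >>> (8 * n)) != 0) n++;  // bytes needed for the value
        return 1 + n;                        // long form: 0x80|n, then n bytes
    }

    static byte[] encodeLength(int len) {
        byte[] out = new byte[lengthOfLengthField(len)];
        if (len < 0x80) {
            out[0] = (byte) len;
        } else {
            int n = out.length - 1;
            out[0] = (byte) (0x80 | n);
            for (int i = 0; i < n; i++) {
                out[1 + i] = (byte) (len >>> (8 * (n - 1 - i)));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        for (byte b : encodeLength(5))   System.out.printf("%02x ", b);
        System.out.println();
        for (byte b : encodeLength(300)) System.out.printf("%02x ", b);
        System.out.println();
    }
}
```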
FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/ServerGroups.java
  class ServerGroups (line 28) | public class ServerGroups {
    method ServerGroups (line 40) | public ServerGroups(ConfigManager configManager) {
    method init (line 45) | public final void init() {
    method shutdown (line 53) | public void shutdown(boolean graceful) {
    method getServerGroup (line 80) | public EventLoopGroup getServerGroup() {
    method getChildGroup (line 84) | public EventLoopGroup getChildGroup() {
    method getBackendGroup (line 88) | public EventLoopGroup getBackendGroup() {
    method registerChannel (line 92) | public void registerChannel(Channel channel) {
    method closeAllChannels (line 96) | private void closeAllChannels(boolean graceful) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/codec/CodecUtils.java
  class CodecUtils (line 21) | public final class CodecUtils {
    method CodecUtils (line 26) | private CodecUtils() {}
    method readLength (line 28) | static int readLength(ByteBuf byteBuf, int maxLength) {
    method readString (line 44) | static String readString(ByteBuf byteBuf, int len) {
    method read (line 48) | static byte[] read(ByteBuf in, int len) {
    method readMap (line 58) | static Map<String, String> readMap(ByteBuf in, int len) {

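`CodecUtils.readLength`/`readString`/`readMap` imply a length-prefixed wire format between frontend and backend workers: each field is an integer length followed by that many payload bytes. The sketch below illustrates the idea with a plain `ByteBuffer`; the negative-length sentinel and the max-length guard are assumptions for illustration, not confirmed details of the MMS protocol.

```java
// Sketch of reading a length-prefixed field, in the spirit of
// CodecUtils.readLength(byteBuf, maxLength) + readString(byteBuf, len).
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FrameSketch {
    static String readField(ByteBuffer in, int maxLength) {
        int len = in.getInt();                       // 4-byte length prefix
        if (len < 0) return null;                    // assumed end-of-message sentinel
        if (len > maxLength) throw new IllegalStateException("frame too large: " + len);
        byte[] buf = new byte[len];
        in.get(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] payload = "squeezenet".getBytes(StandardCharsets.UTF_8);
        ByteBuffer msg = ByteBuffer.allocate(4 + payload.length + 4);
        msg.putInt(payload.length).put(payload).putInt(-1).flip();
        System.out.println(readField(msg, 1024));    // prints squeezenet
        System.out.println(readField(msg, 1024));    // prints null
    }
}
```

The `maxLength` guard matters on a real socket: a corrupt or hostile length prefix must not trigger an unbounded allocation, which is presumably why the signature in the listing carries it.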
FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/codec/ModelRequestEncoder.java
  class ModelRequestEncoder (line 27) | @ChannelHandler.Sharable
    method ModelRequestEncoder (line 30) | public ModelRequestEncoder(boolean preferDirect) {
    method encode (line 34) | @Override
    method encodeRequest (line 73) | private void encodeRequest(RequestInput req, ByteBuf out) {
    method encodeParameter (line 90) | private void encodeParameter(InputParameter parameter, ByteBuf out) {
    method encodeField (line 102) | private static void encodeField(CharSequence field, ByteBuf out) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/codec/ModelResponseDecoder.java
  class ModelResponseDecoder (line 23) | public class ModelResponseDecoder extends ByteToMessageDecoder {
    method ModelResponseDecoder (line 27) | public ModelResponseDecoder(int maxBufferSize) {
    method decode (line 31) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/logging/QLogLayout.java
  class QLogLayout (line 25) | @Plugin(
    method QLogLayout (line 32) | public QLogLayout() {
    method toSerializable (line 89) | @Override
    method createLayout (line 153) | @PluginFactory
    method getStringOrDefault (line 158) | private static String getStringOrDefault(String val, String defVal) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/BaseModelRequest.java
  class BaseModelRequest (line 15) | public class BaseModelRequest {
    method BaseModelRequest (line 20) | public BaseModelRequest() {}
    method BaseModelRequest (line 22) | public BaseModelRequest(WorkerCommands command, String modelName) {
    method getCommand (line 27) | public WorkerCommands getCommand() {
    method getModelName (line 31) | public String getModelName() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/InputParameter.java
  class InputParameter (line 17) | public class InputParameter {
    method InputParameter (line 23) | public InputParameter() {}
    method InputParameter (line 25) | public InputParameter(String name, String value) {
    method InputParameter (line 30) | public InputParameter(String name, byte[] data) {
    method InputParameter (line 34) | public InputParameter(String name, byte[] data, CharSequence contentTy...
    method getName (line 40) | public String getName() {
    method getValue (line 44) | public byte[] getValue() {
    method getContentType (line 48) | public CharSequence getContentType() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/ModelInferenceRequest.java
  class ModelInferenceRequest (line 18) | public class ModelInferenceRequest extends BaseModelRequest {
    method ModelInferenceRequest (line 22) | public ModelInferenceRequest(String modelName) {
    method getRequestBatch (line 27) | public List<RequestInput> getRequestBatch() {
    method setRequestBatch (line 31) | public void setRequestBatch(List<RequestInput> requestBatch) {
    method addRequest (line 35) | public void addRequest(RequestInput req) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/ModelLoadModelRequest.java
  class ModelLoadModelRequest (line 17) | public class ModelLoadModelRequest extends BaseModelRequest {
    method ModelLoadModelRequest (line 30) | public ModelLoadModelRequest(Model model, int gpuId, String fd) {
    method getIoFileDescriptor (line 39) | public String getIoFileDescriptor() {
    method setIoFileDescriptor (line 43) | public void setIoFileDescriptor(String ioFileDescriptor) {
    method getModelPath (line 47) | public String getModelPath() {
    method getHandler (line 51) | public String getHandler() {
    method getBatchSize (line 55) | public int getBatchSize() {
    method getGpuId (line 59) | public int getGpuId() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/ModelWorkerResponse.java
  class ModelWorkerResponse (line 17) | public class ModelWorkerResponse {
    method ModelWorkerResponse (line 23) | public ModelWorkerResponse() {}
    method getCode (line 25) | public int getCode() {
    method setCode (line 29) | public void setCode(int code) {
    method getMessage (line 33) | public String getMessage() {
    method setMessage (line 37) | public void setMessage(String message) {
    method getPredictions (line 41) | public List<Predictions> getPredictions() {
    method setPredictions (line 45) | public void setPredictions(List<Predictions> predictions) {
    method appendPredictions (line 49) | public void appendPredictions(Predictions prediction) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/Predictions.java
  class Predictions (line 17) | public class Predictions {
    method getHeaders (line 27) | public Map<String, String> getHeaders() {
    method setHeaders (line 31) | public void setHeaders(Map<String, String> headers) {
    method Predictions (line 35) | public Predictions() {}
    method getRequestId (line 37) | public String getRequestId() {
    method setRequestId (line 41) | public void setRequestId(String requestId) {
    method getResp (line 45) | public byte[] getResp() {
    method setResp (line 49) | public void setResp(byte[] resp) {
    method getContentType (line 53) | public String getContentType() {
    method setStatusCode (line 57) | public void setStatusCode(int statusCode) {
    method setContentType (line 61) | public void setContentType(String contentType) {
    method getStatusCode (line 65) | public int getStatusCode() {
    method getReasonPhrase (line 69) | public String getReasonPhrase() {
    method setReasonPhrase (line 73) | public void setReasonPhrase(String reasonPhrase) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/RequestInput.java
  class RequestInput (line 21) | public class RequestInput {
    method RequestInput (line 27) | public RequestInput(String requestId) {
    method getRequestId (line 33) | public String getRequestId() {
    method setRequestId (line 37) | public void setRequestId(String requestId) {
    method getHeaders (line 41) | public Map<String, String> getHeaders() {
    method setHeaders (line 45) | public void setHeaders(Map<String, String> headers) {
    method updateHeaders (line 49) | public void updateHeaders(String key, String val) {
    method getParameters (line 53) | public List<InputParameter> getParameters() {
    method setParameters (line 57) | public void setParameters(List<InputParameter> parameters) {
    method addParameter (line 61) | public void addParameter(InputParameter modelInput) {
    method getStringParameter (line 65) | public String getStringParameter(String key) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/util/messages/WorkerCommands.java
  type WorkerCommands (line 5) | public enum WorkerCommands {
    method WorkerCommands (line 17) | WorkerCommands(String command) {
    method getCommand (line 21) | public String getCommand() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/BatchAggregator.java
  class BatchAggregator (line 27) | public class BatchAggregator {
    method BatchAggregator (line 34) | public BatchAggregator(Model model) {
    method getRequest (line 39) | public BaseModelRequest getRequest(String threadName, WorkerState state)
    method sendResponse (line 70) | public void sendResponse(ModelWorkerResponse message) {
    method sendError (line 106) | public void sendError(BaseModelRequest message, String error, HttpResp...

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/Job.java
  class Job (line 31) | public class Job {
    method Job (line 43) | public Job(
    method getJobId (line 54) | public String getJobId() {
    method getModelName (line 58) | public String getModelName() {
    method getCmd (line 62) | public WorkerCommands getCmd() {
    method isControlCmd (line 66) | public boolean isControlCmd() {
    method getPayload (line 70) | public RequestInput getPayload() {
    method setScheduled (line 74) | public void setScheduled() {
    method response (line 78) | public void response(
    method sendError (line 122) | public void sendError(HttpResponseStatus status, String error) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/Model.java
  class Model (line 29) | public class Model {
    method Model (line 50) | public Model(ModelArchive modelArchive, int queueSize, String preloadM...
    method getModelName (line 63) | public String getModelName() {
    method getModelDir (line 67) | public File getModelDir() {
    method getModelUrl (line 71) | public String getModelUrl() {
    method getModelArchive (line 75) | public ModelArchive getModelArchive() {
    method getMinWorkers (line 79) | public int getMinWorkers() {
    method setMinWorkers (line 83) | public void setMinWorkers(int minWorkers) {
    method getMaxWorkers (line 87) | public int getMaxWorkers() {
    method setMaxWorkers (line 91) | public void setMaxWorkers(int maxWorkers) {
    method getBatchSize (line 95) | public int getBatchSize() {
    method setBatchSize (line 99) | public void setBatchSize(int batchSize) {
    method getMaxBatchDelay (line 103) | public int getMaxBatchDelay() {
    method setMaxBatchDelay (line 107) | public void setMaxBatchDelay(int maxBatchDelay) {
    method addJob (line 111) | public void addJob(String threadId, Job job) {
    method removeJobQueue (line 120) | public void removeJobQueue(String threadId) {
    method addJob (line 126) | public boolean addJob(Job job) {
    method addFirst (line 130) | public void addFirst(Job job) {
    method pollBatch (line 134) | public void pollBatch(String threadId, long waitTime, Map<String, Job>...
    method getPort (line 185) | public int getPort() {
    method setPort (line 189) | public void setPort(int port) {
    method incrFailedInfReqs (line 193) | public int incrFailedInfReqs() {
    method resetFailedInfReqs (line 197) | public void resetFailedInfReqs() {
    method getResponseTimeoutSeconds (line 201) | public int getResponseTimeoutSeconds() {
    method setResponseTimeoutSeconds (line 205) | public void setResponseTimeoutSeconds(int responseTimeoutSeconds) {
    method getServerThread (line 209) | public WorkerThread getServerThread() {
    method setServerThread (line 213) | public void setServerThread(WorkerThread serverThread) {
    method preloadModel (line 217) | public String preloadModel() {

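`Model.pollBatch(threadId, waitTime, map)` alongside `getBatchSize`/`getMaxBatchDelay` describes dynamic batching: a worker blocks for the first job, then collects more jobs up to the batch size, waiting at most the configured delay before shipping a partial batch. A self-contained sketch of that loop, with queue type and timings chosen for illustration:

```java
// Illustrative batch-polling loop in the spirit of Model.pollBatch:
// take one job, then drain up to batchSize-1 more within maxBatchDelay.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;

public class PollBatchSketch {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingDeque<String> jobs = new LinkedBlockingDeque<>();
        jobs.add("req-1");
        jobs.add("req-2");
        jobs.add("req-3");

        int batchSize = 2;
        long maxBatchDelayMs = 50;

        List<String> batch = new ArrayList<>();
        batch.add(jobs.take());                      // block for the first job
        long deadline = System.currentTimeMillis() + maxBatchDelayMs;
        while (batch.size() < batchSize) {
            long remaining = deadline - System.currentTimeMillis();
            String job = jobs.poll(Math.max(remaining, 0), TimeUnit.MILLISECONDS);
            if (job == null) break;                  // delay expired: ship partial batch
            batch.add(job);
        }
        System.out.println(batch);                   // prints [req-1, req-2]
    }
}
```

The trade-off encoded here: a larger `maxBatchDelay` improves GPU utilization by filling batches, at the cost of added tail latency for the first request in each batch.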
FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/ModelManager.java
  class ModelManager (line 39) | public final class ModelManager {
    method ModelManager (line 51) | private ModelManager(ConfigManager configManager, WorkLoadManager wlm) {
    method getScheduler (line 59) | public ScheduledExecutorService getScheduler() {
    method init (line 63) | public static void init(ConfigManager configManager, WorkLoadManager w...
    method getInstance (line 67) | public static ModelManager getInstance() {
    method registerModel (line 71) | public ModelArchive registerModel(String url, String defaultModelName,...
    method registerModel (line 86) | public ModelArchive registerModel(
    method unregisterModel (line 144) | public HttpResponseStatus unregisterModel(String modelName) {
    method startBackendServer (line 174) | public void startBackendServer(Model model)
    method updateModel (line 183) | public CompletableFuture<HttpResponseStatus> updateModel(
    method getModels (line 195) | public Map<String, Model> getModels() {
    method getWorkers (line 199) | public List<WorkerThread> getWorkers(String modelName) {
    method getWorkers (line 203) | public Map<Integer, WorkerThread> getWorkers() {
    method addJob (line 207) | public boolean addJob(Job job) throws ModelNotFoundException {
    method workerStatus (line 221) | public void workerStatus(final ChannelHandlerContext ctx) {
    method scaleRequestStatus (line 246) | public boolean scaleRequestStatus(String modelName) {
    method submitTask (line 253) | public void submitTask(Runnable runnable) {
    method getStartupModels (line 257) | public Set<String> getStartupModels() {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/WorkLoadManager.java
  class WorkLoadManager (line 34) | public class WorkLoadManager {
    method WorkLoadManager (line 48) | public WorkLoadManager(ConfigManager configManager, EventLoopGroup bac...
    method getWorkers (line 58) | public List<WorkerThread> getWorkers(String modelName) {
    method getWorkers (line 66) | public Map<Integer, WorkerThread> getWorkers() {
    method hasNoWorker (line 82) | public boolean hasNoWorker(String modelName) {
    method getNumRunningWorkers (line 90) | public int getNumRunningWorkers(String modelName) {
    method modelChanged (line 107) | public CompletableFuture<HttpResponseStatus> modelChanged(Model model) {
    method shutdownServerThread (line 143) | private CompletableFuture<HttpResponseStatus> shutdownServerThread(
    method addServerThread (line 171) | public void addServerThread(Model model, CompletableFuture<HttpRespons...
    method addThreads (line 194) | private void addThreads(
    method scheduleAsync (line 226) | public void scheduleAsync(Runnable r) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/WorkerInitializationException.java
  class WorkerInitializationException (line 15) | public class WorkerInitializationException extends Exception {
    method WorkerInitializationException (line 20) | public WorkerInitializationException(String message) {
    method WorkerInitializationException (line 34) | public WorkerInitializationException(String message, Throwable cause) {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/WorkerLifeCycle.java
  class WorkerLifeCycle (line 34) | public class WorkerLifeCycle {
    method WorkerLifeCycle (line 48) | public WorkerLifeCycle(ConfigManager configManager, Model model) {
    method getEnvString (line 54) | private String[] getEnvString(String cwd, String modelPath, String han...
    method attachIOStreams (line 90) | public synchronized void attachIOStreams(
    method terminateIOStreams (line 99) | public synchronized void terminateIOStreams() {
    method startBackendServer (line 110) | public void startBackendServer(int port)
    method exit (line 180) | public synchronized void exit() {
    method getExitValue (line 188) | public synchronized Integer getExitValue() {
    method setSuccess (line 195) | void setSuccess(boolean success) {
    method getPid (line 200) | public synchronized int getPid() {
    method setPid (line 204) | public synchronized void setPid(int pid) {
    method setPort (line 208) | private synchronized void setPort(int port) {
    method getProcess (line 212) | public Process getProcess() {
    class ReaderThread (line 216) | private static final class ReaderThread extends Thread {
      method ReaderThread (line 225) | public ReaderThread(String name, InputStream is, boolean error, Work...
      method terminate (line 232) | public void terminate() {
      method run (line 236) | @Override

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/WorkerState.java
  type WorkerState (line 15) | public enum WorkerState {

FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/WorkerStateListener.java
  class WorkerStateListener (line 19) | public class WorkerStateListener {
    method WorkerStateListener (line 24) | public WorkerStateListener(CompletableFuture<HttpResponseStatus> futur...
    method notifyChangeState (line 29) | public void notifyChangeState(String modelName, WorkerState state, Htt...

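`WorkerStateListener`, constructed from a `CompletableFuture<HttpResponseStatus>` and notified via `notifyChangeState`, suggests the classic future-completion pattern for scaling calls: the management request parks on a future that worker state-change callbacks complete once enough workers report ready. A hedged sketch of the pattern (counter logic and result type are illustrative, not the MMS implementation):

```java
// Sketch of the future-completion pattern WorkerStateListener implies:
// a callback decrements a pending-worker count and completes the future
// the caller is waiting on when the count reaches zero.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class StateListenerSketch {
    private final CompletableFuture<String> future;
    private final AtomicInteger pending;

    StateListenerSketch(CompletableFuture<String> future, int count) {
        this.future = future;
        this.pending = new AtomicInteger(count);
    }

    void notifyWorkerReady(String modelName) {
        if (pending.decrementAndGet() == 0) {   // last worker flips the future
            future.complete(modelName + ": OK");
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> f = new CompletableFuture<>();
        StateListenerSketch listener = new StateListenerSketch(f, 2);
        listener.notifyWorkerReady("noop");
        System.out.println(f.isDone());         // prints false, one worker pending
        listener.notifyWorkerReady("noop");
        System.out.println(f.join());           // prints noop: OK
    }
}
```

This decouples the synchronous management API (`synchronous=true` scaling) from asynchronous worker startup: the HTTP handler simply awaits the future instead of polling worker state.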
FILE: frontend/server/src/main/java/com/amazonaws/ml/mms/wlm/WorkerThread.java
  class WorkerThread (line 51) | public class WorkerThread implements Runnable {
    method getState (line 94) | public WorkerState getState() {
    method getLifeCycle (line 98) | public WorkerLifeCycle getLifeCycle() {
    method WorkerThread (line 102) | public WorkerThread(
    method runWorker (line 140) | private void runWorker()
    method run (line 201) | @Override
    method getWorkerId (line 268) | public String getWorkerId() {
    method getMemory (line 272) | public long getMemory() {
    method setMemory (line 276) | public void setMemory(long memory) {
    method connect (line 280) | private void connect()
    method isRunning (line 364) | public boolean isRunning() {
    method getGpuId (line 368) | public int getGpuId() {
    method getStartTime (line 372) | public long getStartTime() {
    method getPid (line 376) | public int getPid() {
    method shutdown (line 380) | public void shutdown() {
    method isServerThread (line 411) | public boolean isServerThread() {
    method getWorkerName (line 415) | private final String getWorkerName() {
    method setState (line 423) | void setState(WorkerState newState, HttpResponseStatus status) {
    method retry (line 439) | void retry() {
    class WorkerHandler (line 456) | @ChannelHandler.Sharable
      method channelRead0 (line 459) | @Override
      method exceptionCaught (line 466) | @Override

FILE: frontend/server/src/test/java/com/amazonaws/ml/mms/CoverageTest.java
  class CoverageTest (line 19) | public class CoverageTest {
    method test (line 21) | @Test

FILE: frontend/server/src/test/java/com/amazonaws/ml/mms/ModelServerTest.java
  class ModelServerTest (line 78) | public class ModelServerTest {
    method beforeSuite (line 99) | @BeforeSuite
    method afterSuite (line 123) | @AfterSuite
    method test (line 128) | @Test
    method testRoot (line 219) | private void testRoot(Channel channel, String expected) throws Interru...
    method testPing (line 229) | private void testPing(Channel channel) throws InterruptedException {
    method testApiDescription (line 241) | private void testApiDescription(Channel channel, String expected) thro...
    method testDescribeApi (line 253) | private void testDescribeApi(Channel channel) throws InterruptedExcept...
    method testLoadModel (line 265) | private void testLoadModel(Channel channel) throws InterruptedException {
    method testLoadModelWithInitialWorkers (line 280) | private void testLoadModelWithInitialWorkers(Channel channel) throws I...
    method testLoadModelWithInitialWorkersWithJSONReqBody (line 297) | private void testLoadModelWithInitialWorkersWithJSONReqBody(Channel ch...
    method testScaleModel (line 318) | private void testScaleModel(Channel channel) throws InterruptedExcepti...
    method testSyncScaleModel (line 331) | private void testSyncScaleModel(Channel channel) throws InterruptedExc...
    method testUnregisterModel (line 346) | private void testUnregisterModel(Channel channel) throws InterruptedEx...
    method testListModels (line 359) | private void testListModels(Channel channel) throws InterruptedExcepti...
    method testDescribeModel (line 372) | private void testDescribeModel(Channel channel) throws InterruptedExce...
    method testPredictions (line 385) | private void testPredictions(Channel channel) throws InterruptedExcept...
    method testPredictionsJson (line 403) | private void testPredictionsJson(Channel channel) throws InterruptedEx...
    method testPredictionsBinary (line 418) | private void testPredictionsBinary(Channel channel) throws Interrupted...
    method testInvocationsJson (line 434) | private void testInvocationsJson(Channel channel) throws InterruptedEx...
    method testInvocationsMultipart (line 449) | private void testInvocationsMultipart(Channel channel)
    method testModelsInvokeJson (line 474) | private void testModelsInvokeJson(Channel channel) throws InterruptedE...
    method testModelsInvokeMultipart (line 489) | private void testModelsInvokeMultipart(Channel channel)
    method testPredictionsInvalidRequestSize (line 514) | private void testPredictionsInvalidRequestSize(Channel channel) throws...
    method testPredictionsValidRequestSize (line 531) | private void testPredictionsValidRequestSize(Channel channel) throws I...
    method loadTests (line 548) | private void loadTests(Channel channel, String model, String modelName)
    method unloadTests (line 563) | private void unloadTests(Channel channel, String modelName) throws Int...
    method setConfiguration (line 575) | private void setConfiguration(String key, String val)
    method testModelRegisterWithDefaultWorkers (line 583) | private void testModelRegisterWithDefaultWorkers(Channel mgmtChannel)
    method testPredictionsDecodeRequest (line 603) | private void testPredictionsDecodeRequest(Channel inferChannel, Channe...
    method testPredictionsDoNotDecodeRequest (line 625) | private void testPredictionsDoNotDecodeRequest(Channel inferChannel, C...
    method testPredictionsModifyResponseHeader (line 647) | private void testPredictionsModifyResponseHeader(
    method testPredictionsNoManifest (line 672) | private void testPredictionsNoManifest(Channel inferChannel, Channel m...
    method testLegacyPredict (line 694) | private void testLegacyPredict(Channel channel) throws InterruptedExce...
    method testInvalidRootRequest (line 706) | private void testInvalidRootRequest() throws InterruptedException {
    method testInvalidInferenceUri (line 720) | private void testInvalidInferenceUri() throws InterruptedException {
    method testInvalidDescribeModel (line 735) | private void testInvalidDescribeModel() throws InterruptedException {
    method testInvalidPredictionsUri (line 751) | private void testInvalidPredictionsUri() throws InterruptedException {
    method testPredictionsModelNotFound (line 766) | private void testPredictionsModelNotFound() throws InterruptedException {
    method testInvalidManagementUri (line 782) | private void testInvalidManagementUri() throws InterruptedException {
    method testInvalidModelsMethod (line 797) | private void testInvalidModelsMethod() throws InterruptedException {
    method testInvalidModelMethod (line 812) | private void testInvalidModelMethod() throws InterruptedException {
    method testDescribeModelNotFound (line 827) | private void testDescribeModelNotFound() throws InterruptedException {
    method testRegisterModelMissingUrl (line 843) | private void testRegisterModelMissingUrl() throws InterruptedException {
    method testRegisterModelInvalidRuntime (line 858) | private void testRegisterModelInvalidRuntime() throws InterruptedExcep...
    method testRegisterModelNotFound (line 876) | private void testRegisterModelNotFound() throws InterruptedException {
    method testRegisterModelConflict (line 892) | private void testRegisterModelConflict() throws InterruptedException {
    method testRegisterModelMalformedUrl (line 918) | private void testRegisterModelMalformedUrl() throws InterruptedExcepti...
    method testRegisterModelConnectionFailed (line 936) | private void testRegisterModelConnectionFailed() throws InterruptedExc...
    method testRegisterModelHttpError (line 956) | private void testRegisterModelHttpError() throws InterruptedException {
    method testRegisterModelInvalidPath (line 976) | private void testRegisterModelInvalidPath() throws InterruptedException {
    method testScaleModelNotFound (line 994) | private void testScaleModelNotFound() throws InterruptedException {
    method testUnregisterModelNotFound (line 1009) | private void testUnregisterModelNotFound() throws InterruptedException {
    method testUnregisterModelTimeout (line 1024) | private void testUnregisterModelTimeout()
    method testScaleModelFailure (line 1052) | private void testScaleModelFailure() throws InterruptedException {
    method testInvalidModel (line 1087) | private void testInvalidModel() throws InterruptedException {
    method testLoadingMemoryError (line 1132) | private void testLoadingMemoryError() throws InterruptedException {
    method testPredictionMemoryError (line 1149) | private void testPredictionMemoryError() throws InterruptedException {
    method testPredictionCustomErrorCode (line 1194) | private void testPredictionCustomErrorCode() throws InterruptedExcepti...
    method testErrorBatch (line 1228) | private void testErrorBatch() throws InterruptedException {
    method testMetricManager (line 1271) | private void testMetricManager() throws JsonParseException, Interrupte...
    method testLogging (line 1301) | private void testLogging(Channel inferChannel, Channel mgmtChannel)
    method testLoggingUnload (line 1339) | private void testLoggingUnload(Channel inferChannel, Channel mgmtChannel)
    method connect (line 1358) | private Channel connect(boolean management) {
    class TestHandler (line 1395) | @ChannelHandler.Sharable
      method channelRead0 (line 1398) | @Override
      method exceptionCaught (line 1406) | @Override

FILE: frontend/server/src/test/java/com/amazonaws/ml/mms/TestUtils.java
  class TestUtils (line 20) | public final class TestUtils {
    method TestUtils (line 22) | private TestUtils() {}
    method init (line 24) | public static void init() {

FILE: frontend/server/src/test/java/com/amazonaws/ml/mms/test/TestHelper.java
  class TestHelper (line 29) | public final class TestHelper {
    method TestHelper (line 31) | private TestHelper() {}
    method testGetterSetters (line 33) | public static void testGetterSetters(Class<?> baseClass)
    method getClasses (line 75) | private static List<Class<?>> getClasses(Class<?> clazz)
    method getMockValue (line 117) | private static Object getMockValue(Class<?> type) {

FILE: frontend/server/src/test/java/com/amazonaws/ml/mms/util/ConfigManagerTest.java
  class ConfigManagerTest (line 32) | public class ConfigManagerTest {
    method createMetric (line 37) | private Metric createMetric(String metricName, String requestId) {
    method modifyEnv (line 52) | @SuppressWarnings("unchecked")
    method test (line 82) | @Test
    method testNoEnvVars (line 117) | @Test
    method testResponseTimeoutSeconds (line 130) | @Test

FILE: mms/arg_parser.py
  class ArgParser (line 20) | class ArgParser(object):
    method mms_parser (line 26) | def mms_parser():
    method str2bool (line 59) | def str2bool(v):
    method model_service_worker_args (line 67) | def model_service_worker_args():
    method extract_args (line 124) | def extract_args(args=None):
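
The `str2bool` helper listed above is the usual argparse idiom for coercing command-line strings into booleans. A minimal self-contained sketch of that idiom (the accepted spellings and the `--foreground` flag are illustrative, not taken from the real parser):

```python
import argparse

def str2bool(v):
    """Interpret common true/false spellings from the command line."""
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError("Boolean value expected.")

parser = argparse.ArgumentParser()
parser.add_argument("--foreground", type=str2bool, default=False)
args = parser.parse_args(["--foreground", "yes"])
print(args.foreground)  # True
```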

FILE: mms/context.py
  class Context (line 16) | class Context(object):
    method __init__ (line 22) | def __init__(self, model_name, model_dir, manifest, batch_size, gpu, m...
    method system_properties (line 37) | def system_properties(self):
    method request_processor (line 41) | def request_processor(self):
    method request_processor (line 45) | def request_processor(self, request_processor):
    method metrics (line 49) | def metrics(self):
    method metrics (line 53) | def metrics(self, metrics):
    method get_request_id (line 56) | def get_request_id(self, idx=0):
    method get_request_header (line 59) | def get_request_header(self, idx, key):
    method get_all_request_header (line 62) | def get_all_request_header(self, idx):
    method set_response_content_type (line 65) | def set_response_content_type(self, idx, value):
    method get_response_content_type (line 68) | def get_response_content_type(self, idx):
    method get_response_status (line 71) | def get_response_status(self, idx):
    method set_response_status (line 75) | def set_response_status(self, code=200, phrase="", idx=0):
    method set_all_response_status (line 87) | def set_all_response_status(self, code=200, phrase=""):
    method get_response_headers (line 97) | def get_response_headers(self, idx):
    method set_response_header (line 100) | def set_response_header(self, idx, key, value):
    method __eq__ (line 105) | def __eq__(self, other):
  class RequestProcessor (line 109) | class RequestProcessor(object):
    method __init__ (line 114) | def __init__(self, request_header):
    method get_request_property (line 120) | def get_request_property(self, key):
    method report_status (line 123) | def report_status(self, code, reason_phrase=None):
    method get_response_status_code (line 127) | def get_response_status_code(self):
    method get_response_status_phrase (line 130) | def get_response_status_phrase(self):
    method add_response_property (line 133) | def add_response_property(self, key, value):
    method get_response_headers (line 136) | def get_response_headers(self):
    method get_response_header (line 139) | def get_response_header(self, key):
    method get_request_properties (line 142) | def get_request_properties(self):
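
The `Context` listing above shows per-request accessors keyed by a batch index (`idx`), with both per-index and all-index setters for response status. A toy sketch of that bookkeeping shape, not the real `mms.context.Context` (field layout and defaults are assumptions):

```python
class MiniContext:
    """Toy per-request response bookkeeping, mirroring the Context API shape."""

    def __init__(self, request_ids):
        self.request_ids = request_ids  # assumed: {batch_idx: request_id}
        self._status = {}               # batch_idx -> (code, phrase)
        self._headers = {}              # batch_idx -> {key: value}

    def set_response_status(self, code=200, phrase="", idx=0):
        self._status[idx] = (code, phrase)

    def set_all_response_status(self, code=200, phrase=""):
        for idx in self.request_ids:
            self._status[idx] = (code, phrase)

    def get_response_status(self, idx):
        return self._status.get(idx, (200, ""))

    def set_response_header(self, idx, key, value):
        self._headers.setdefault(idx, {})[key] = value

ctx = MiniContext({0: "req-a", 1: "req-b"})
ctx.set_all_response_status(200)
ctx.set_response_status(507, "Insufficient memory", idx=1)
print(ctx.get_response_status(1))  # (507, 'Insufficient memory')
```

The per-index design lets one misbehaving request in a batch fail with its own status while the rest of the batch still returns 200.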

FILE: mms/export_model.py
  function main (line 16) | def main():

FILE: mms/metrics/dimension.py
  class Dimension (line 16) | class Dimension(object):
    method __init__ (line 20) | def __init__(self, name, value):
    method __str__ (line 34) | def __str__(self):
    method to_dict (line 41) | def to_dict(self):

FILE: mms/metrics/metric.py
  class Metric (line 25) | class Metric(object):
    method __init__ (line 30) | def __init__(self, name, value,
    method update (line 62) | def update(self, value):
    method __str__ (line 77) | def __str__(self):
    method to_dict (line 87) | def to_dict(self):

FILE: mms/metrics/metric_encoder.py
  class MetricEncoder (line 22) | class MetricEncoder(JSONEncoder):
    method default (line 26) | def default(self, obj):  # pylint: disable=arguments-differ, method-hi...
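
`MetricEncoder` subclasses `JSONEncoder` and overrides `default`, the standard way to serialize objects (here `Metric`/`Dimension`, which both expose `to_dict`) that `json` cannot handle natively. A self-contained sketch of the pattern; the `Dimension` stand-in and its field names are illustrative:

```python
import json
from json import JSONEncoder

class Dimension:
    """Stand-in for a metric dimension with a to_dict() hook."""
    def __init__(self, name, value):
        self.name, self.value = name, value

    def to_dict(self):
        return {"Name": self.name, "Value": self.value}

class ToDictEncoder(JSONEncoder):
    """Fall back to an object's to_dict() before giving up."""
    def default(self, obj):
        if hasattr(obj, "to_dict"):
            return obj.to_dict()
        return super().default(obj)  # raises TypeError for unknown types

payload = json.dumps({"dims": [Dimension("Level", "Model")]}, cls=ToDictEncoder)
print(payload)  # {"dims": [{"Name": "Level", "Value": "Model"}]}
```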

FILE: mms/metrics/metrics_store.py
  class MetricsStore (line 20) | class MetricsStore(object):
    method __init__ (line 25) | def __init__(self, request_ids, model_name):
    method _add_or_update (line 34) | def _add_or_update(self, name, value, req_id, unit, metrics_method=Non...
    method _get_req (line 74) | def _get_req(self, idx):
    method add_counter (line 92) | def add_counter(self, name, value, idx=None, dimensions=None):
    method add_time (line 111) | def add_time(self, name, value, idx=None, unit='ms', dimensions=None):
    method add_size (line 133) | def add_size(self, name, value, idx=None, unit='MB', dimensions=None):
    method add_percent (line 155) | def add_percent(self, name, value, idx=None, dimensions=None):
    method add_error (line 174) | def add_error(self, name, value, dimensions=None):
    method add_metric (line 191) | def add_metric(self, name, value, idx=None, unit=None, dimensions=None):

FILE: mms/metrics/process_memory_metric.py
  function get_cpu_usage (line 20) | def get_cpu_usage(pid):
  function check_process_mem_usage (line 39) | def check_process_mem_usage(stdin):

FILE: mms/metrics/system_metrics.py
  function cpu_utilization (line 26) | def cpu_utilization():
  function memory_used (line 31) | def memory_used():
  function memory_available (line 36) | def memory_available():
  function memory_utilization (line 41) | def memory_utilization():
  function disk_used (line 46) | def disk_used():
  function disk_utilization (line 51) | def disk_utilization():
  function disk_available (line 56) | def disk_available():
  function collect_all (line 61) | def collect_all(mod):

FILE: mms/metrics/unit.py
  class Units (line 15) | class Units(object):
    method __init__ (line 20) | def __init__(self):

FILE: mms/model_loader.py
  class ModelLoaderFactory (line 29) | class ModelLoaderFactory(object):
    method get_model_loader (line 35) | def get_model_loader(model_dir):
  class ModelLoader (line 45) | class ModelLoader(object):
    method load (line 52) | def load(self, model_name, model_dir, handler, gpu_id, batch_size):
    method list_model_services (line 66) | def list_model_services(module, parent_class=None):
  class MmsModelLoader (line 85) | class MmsModelLoader(ModelLoader):
    method load (line 90) | def load(self, model_name, model_dir, handler, gpu_id, batch_size):
    method unload (line 158) | def unload(self):
  class LegacyModelLoader (line 166) | class LegacyModelLoader(ModelLoader):
    method load (line 171) | def load(self, model_name, model_dir, handler, gpu_id, batch_size):

FILE: mms/model_server.py
  function old_start (line 17) | def old_start():
  function start (line 27) | def start():
  function load_properties (line 165) | def load_properties(file_path):

FILE: mms/model_service/gluon_vision_service.py
  class GluonVisionService (line 19) | class GluonVisionService(GluonImperativeBaseService):
    method _preprocess (line 25) | def _preprocess(self, data):
    method _inference (line 43) | def _inference(self, data):
    method _postprocess (line 63) | def _postprocess(self, data):

FILE: mms/model_service/model_service.py
  class ModelService (line 23) | class ModelService(object):
    method __init__ (line 32) | def __init__(self, model_name, model_dir, manifest, gpu=None):  # pyli...
    method initialize (line 37) | def initialize(self, context):
    method inference (line 56) | def inference(self, data):
    method ping (line 74) | def ping(self):
    method signature (line 85) | def signature(self):
    method handle (line 97) | def handle(self, data, context):  # pylint: disable=unused-argument
  class SingleNodeService (line 131) | class SingleNodeService(ModelService):
    method inference (line 137) | def inference(self, data):
    method _inference (line 166) | def _inference(self, data):
    method _preprocess (line 183) | def _preprocess(self, data):
    method _postprocess (line 200) | def _postprocess(self, data):

FILE: mms/model_service/mxnet_model_service.py
  function check_input_shape (line 24) | def check_input_shape(inputs, signature):
  class MXNetBaseService (line 57) | class MXNetBaseService(SingleNodeService):
    method __init__ (line 64) | def __init__(self, model_name, model_dir, manifest, gpu=None):
    method _preprocess (line 117) | def _preprocess(self, data):
    method _postprocess (line 120) | def _postprocess(self, data):
    method _inference (line 123) | def _inference(self, data):
    method ping (line 153) | def ping(self):
    method signature (line 165) | def signature(self):
  class GluonImperativeBaseService (line 177) | class GluonImperativeBaseService(SingleNodeService):
    method __init__ (line 183) | def __init__(self, model_name, model_dir, manifest, net=None, gpu=None):
    method _preprocess (line 218) | def _preprocess(self, data):
    method _postprocess (line 221) | def _postprocess(self, data):
    method _inference (line 224) | def _inference(self, data):
    method ping (line 227) | def ping(self):
    method signature (line 239) | def signature(self):

FILE: mms/model_service/mxnet_vision_service.py
  class MXNetVisionService (line 18) | class MXNetVisionService(MXNetBaseService):
    method _preprocess (line 24) | def _preprocess(self, data):
    method _postprocess (line 36) | def _postprocess(self, data):

FILE: mms/model_service_worker.py
  class MXNetModelServiceWorker (line 36) | class MXNetModelServiceWorker(object):
    method __init__ (line 40) | def __init__(self, s_type=None, s_name=None, host_addr=None, port_num=...
    method load_model (line 77) | def load_model(self, load_model_request=None):
    method _create_io_files (line 116) | def _create_io_files(self, tmp_dir, io_fd):
    method _remap_io (line 123) | def _remap_io(self):
    method handle_connection (line 129) | def handle_connection(self, cl_socket):
    method sigterm_handler (line 158) | def sigterm_handler(self):
    method start_worker (line 165) | def start_worker(self, cl_socket):
    method run_server (line 189) | def run_server(self):

FILE: mms/protocol/otf_message_handler.py
  function retrieve_msg (line 29) | def retrieve_msg(conn):
  function encode_response_headers (line 47) | def encode_response_headers(resp_hdr_map):
  function create_predict_response (line 58) | def create_predict_response(ret, req_id_map, message, code, context=None):
  function create_load_model_response (line 137) | def create_load_model_response(code, message):
  function _retrieve_buffer (line 156) | def _retrieve_buffer(conn, length):
  function _retrieve_int (line 171) | def _retrieve_int(conn):
  function _retrieve_load_msg (line 176) | def _retrieve_load_msg(conn):
  function _retrieve_inference_msg (line 207) | def _retrieve_inference_msg(conn):
  function _retrieve_request (line 225) | def _retrieve_request(conn):
  function _retrieve_reqest_header (line 260) | def _retrieve_reqest_header(conn):
  function _retrieve_input_data (line 281) | def _retrieve_input_data(conn):
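
The `_retrieve_int` / `_retrieve_buffer` helpers above suggest a length-prefixed binary protocol between the frontend and the Python worker. A minimal framing sketch under the assumption of 4-byte big-endian length prefixes; the real MMS wire format carries additional fields (command byte, headers, request ids) beyond this:

```python
import io
import struct

def send_frame(buf, payload):
    """Write one frame: 4-byte big-endian length, then the payload."""
    buf.write(struct.pack("!i", len(payload)))
    buf.write(payload)

def retrieve_buffer(conn, length):
    """Read exactly `length` bytes, looping over short reads."""
    data = b""
    while len(data) < length:
        chunk = conn.read(length - len(data))
        if not chunk:
            raise RuntimeError("connection closed mid-frame")
        data += chunk
    return data

def retrieve_int(conn):
    return struct.unpack("!i", retrieve_buffer(conn, 4))[0]

wire = io.BytesIO()
send_frame(wire, b'{"modelName": "noop"}')
wire.seek(0)
payload_len = retrieve_int(wire)
payload = retrieve_buffer(wire, payload_len)
print(payload)  # b'{"modelName": "noop"}'
```

The read loop matters on a real socket: `recv`/`read` may return fewer bytes than requested, so a single read cannot be trusted to yield the whole frame.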

FILE: mms/service.py
  class Service (line 28) | class Service(object):
    method __init__ (line 33) | def __init__(self, model_name, model_dir, manifest, entry_point, gpu, ...
    method context (line 38) | def context(self):
    method retrieve_data_for_inference (line 42) | def retrieve_data_for_inference(batch):
    method predict (line 87) | def predict(self, batch):
  class PredictionException (line 134) | class PredictionException(Exception):
    method __init__ (line 135) | def __init__(self, message, error_code=500):
    method __str__ (line 140) | def __str__(self):
  function emit_metrics (line 144) | def emit_metrics(metrics):
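
`PredictionException` above carries a custom `error_code` (defaulting to 500) so a handler can surface a specific HTTP-style status instead of a generic failure. A sketch of the exception shape; the `__str__` format and the 413 example are illustrative:

```python
class PredictionException(Exception):
    """Raised by a handler to report a specific error code to the frontend."""

    def __init__(self, message, error_code=500):
        self.message = message
        self.error_code = error_code
        super().__init__(message)

    def __str__(self):
        return "message : %s, error_code : %s" % (self.message, self.error_code)

try:
    raise PredictionException("input tensor too large", 413)
except PredictionException as e:
    print(e.error_code)  # 413
```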

FILE: mms/tests/unit_tests/helper/pixel2pixel_service.py
  class UnetSkipUnit (line 24) | class UnetSkipUnit(HybridBlock):
    method __init__ (line 25) | def __init__(self, inner_channels, outer_channels, inner_block=None, i...
    method hybrid_forward (line 64) | def hybrid_forward(self, F, x):
  class UnetGenerator (line 72) | class UnetGenerator(HybridBlock):
    method __init__ (line 73) | def __init__(self, in_channels, num_downs, ngf=64, use_dropout=True):
    method hybrid_forward (line 88) | def hybrid_forward(self, F, x):
  class Pixel2pixelService (line 91) | class Pixel2pixelService(MXNetBaseService):
    method __init__ (line 93) | def __init__(self, model_name, path):
    method _preprocess (line 97) | def _preprocess(self, data):
    method _inference (line 106) | def _inference(self, data):
    method _postprocess (line 110) | def _postprocess(self, data):

FILE: mms/tests/unit_tests/model_service/dummy_model/dummy_model_service.py
  class DummyNodeService (line 18) | class DummyNodeService(SingleNodeService):
    method _inference (line 19) | def _inference(self, data):
    method signature (line 22) | def signature(self):
    method ping (line 25) | def ping(self):
    method inference (line 28) | def inference(self):
  class SomeOtherClass (line 32) | class SomeOtherClass:
    method __init__ (line 33) | def __init__(self):

FILE: mms/tests/unit_tests/model_service/test_mxnet_image.py
  class TestMXNetImageUtils (line 23) | class TestMXNetImageUtils(unittest.TestCase):
    method _write_image (line 24) | def _write_image(self, img_arr, flag=1):
    method test_transform_shape (line 35) | def test_transform_shape(self):
    method test_read (line 44) | def test_read(self):
    method test_write (line 55) | def test_write(self):
    method test_resize (line 64) | def test_resize(self):
    method test_fix_crop (line 69) | def test_fix_crop(self):
    method test_color_normalize (line 74) | def test_color_normalize(self):
    method runTest (line 79) | def runTest(self):

FILE: mms/tests/unit_tests/model_service/test_mxnet_ndarray.py
  class TestMXNetNDArrayUtils (line 20) | class TestMXNetNDArrayUtils(unittest.TestCase):
    method test_top_prob (line 21) | def test_top_prob(self):
    method runTest (line 28) | def runTest(self):

FILE: mms/tests/unit_tests/model_service/test_mxnet_nlp.py
  class TestMXNetNLPUtils (line 21) | class TestMXNetNLPUtils(unittest.TestCase):
    method test_encode_sentence (line 22) | def test_encode_sentence(self):
    method test_pad_sentence (line 42) | def test_pad_sentence(self):

FILE: mms/tests/unit_tests/model_service/test_service.py
  function empty_file (line 30) | def empty_file(path):
  function module_dir (line 34) | def module_dir(tmpdir):
  function create_symbolic_manifest (line 67) | def create_symbolic_manifest(path):
  function create_imperative_manifest (line 90) | def create_imperative_manifest(path):
  class TestService (line 113) | class TestService(unittest.TestCase):
    method setUp (line 114) | def setUp(self):
    method tearDown (line 117) | def tearDown(self):
    method _train_and_export (line 120) | def _train_and_export(self, path):
    method _write_image (line 158) | def _write_image(self, img_arr):
    method test_vision_init (line 166) | def test_vision_init(self):
    method test_vision_inference (line 172) | def test_vision_inference(self):
    method test_gluon_inference (line 178) | def test_gluon_inference(self):
    method test_mxnet_model_service (line 215) | def test_mxnet_model_service(self):
    method test_gluon_model_service (line 227) | def test_gluon_model_service(self):
    method runTest (line 239) | def runTest(self):

FILE: mms/tests/unit_tests/test_beckend_metric.py
  function get_model_key (line 12) | def get_model_key(name, unit, req_id, model_name):
  function get_error_key (line 20) | def get_error_key(name, unit):
  function test_metrics (line 27) | def test_metrics(caplog):

FILE: mms/tests/unit_tests/test_model_loader.py
  class TestModelFactory (line 29) | class TestModelFactory:
    method test_model_loader_factory_legacy (line 31) | def test_model_loader_factory_legacy(self):
    method test_model_loader_factory (line 37) | def test_model_loader_factory(self):
  class TestListModels (line 45) | class TestListModels:
    method test_list_models_legacy (line 47) | def test_list_models_legacy(self):
    method test_list_models (line 55) | def test_list_models(self):
  class TestLoadModels (line 66) | class TestLoadModels:
    method patches (line 73) | def patches(self, mocker):
    method test_load_model_legacy (line 83) | def test_load_model_legacy(self, patches):
    method test_load_class_model (line 96) | def test_load_class_model(self, patches):
    method test_load_func_model (line 106) | def test_load_func_model(self, patches):
    method test_load_func_model_with_error (line 117) | def test_load_func_model_with_error(self, patches):
    method test_load_model_with_error (line 126) | def test_load_model_with_error(self, patches):

FILE: mms/tests/unit_tests/test_model_service_worker.py
  function socket_patches (line 27) | def socket_patches(mocker):
  function model_service_worker (line 42) | def model_service_worker(socket_patches):
  class TestInit (line 50) | class TestInit:
    method test_missing_socket_name (line 53) | def test_missing_socket_name(self):
    method test_socket_in_use (line 57) | def test_socket_in_use(self, mocker):
    method patches (line 67) | def patches(self, mocker):
    method test_success (line 75) | def test_success(self, patches):
  class TestRunServer (line 82) | class TestRunServer:
    method test_with_socket_bind_error (line 85) | def test_with_socket_bind_error(self, socket_patches, model_service_wo...
    method test_with_timeout (line 94) | def test_with_timeout(self, socket_patches, model_service_worker):
    method test_with_run_server_debug (line 104) | def test_with_run_server_debug(self, socket_patches, model_service_wor...
    method test_success (line 117) | def test_success(self, model_service_worker):
  class TestLoadModel (line 128) | class TestLoadModel:
    method patches (line 132) | def patches(self, mocker):
    method test_load_model (line 137) | def test_load_model(self, patches, model_service_worker):
    method test_optional_args (line 145) | def test_optional_args(self, patches, model_service_worker, batch_size...
  class TestHandleConnection (line 155) | class TestHandleConnection:
    method patches (line 159) | def patches(self, mocker):
    method test_handle_connection (line 166) | def test_handle_connection(self, patches, model_service_worker):

FILE: mms/tests/unit_tests/test_otf_codec_protocol.py
  function socket_patches (line 27) | def socket_patches(mocker):
  class TestOtfCodecHandler (line 35) | class TestOtfCodecHandler:
    method test_retrieve_msg_unknown (line 37) | def test_retrieve_msg_unknown(self, socket_patches):
    method test_retrieve_msg_load_gpu (line 42) | def test_retrieve_msg_load_gpu(self, socket_patches):
    method test_retrieve_msg_load_no_gpu (line 61) | def test_retrieve_msg_load_no_gpu(self, socket_patches):
    method test_retrieve_msg_predict (line 79) | def test_retrieve_msg_predict(self, socket_patches):
    method test_retrieve_msg_predict_text (line 104) | def test_retrieve_msg_predict_text(self, socket_patches):
    method test_retrieve_msg_predict_binary (line 129) | def test_retrieve_msg_predict_binary(self, socket_patches):
    method test_create_load_model_response (line 154) | def test_create_load_model_response(self):
    method test_create_predict_response (line 159) | def test_create_predict_response(self):
    method test_create_predict_response_with_error (line 164) | def test_create_predict_response_with_error(self):

FILE: mms/tests/unit_tests/test_utils/dummy_class_model_service.py
  class CustomService (line 17) | class CustomService(object):
    method initialize (line 19) | def initialize(self, context):
    method handle (line 23) | def handle(self, data, context):

FILE: mms/tests/unit_tests/test_utils/dummy_func_model_service.py
  function infer (line 19) | def infer(data, context):
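
The two dummy test-util services above illustrate the two custom-service entry-point shapes the model loader accepts: a class exposing `initialize(context)` and `handle(data, context)`, or a bare function taking `(data, context)`. A self-contained sketch of both shapes; the echo logic is purely illustrative:

```python
class CustomService(object):
    """Class-style entry point: initialize once, handle per batch."""

    def __init__(self):
        self.initialized = False

    def initialize(self, context):
        self.initialized = True  # load weights, vocab, etc. here

    def handle(self, data, context):
        # Must return one response per request in the batch.
        return [{"echo": d} for d in data]

def infer(data, context):
    """Function-style entry point: same (data, context) contract."""
    return [{"echo": d} for d in data]

svc = CustomService()
svc.initialize(context=None)
print(svc.handle(["hello"], context=None))  # [{'echo': 'hello'}]
```

Keeping the batch length of the return value equal to the input length is what lets the frontend route each response back to its originating request.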

FILE: mms/tests/unit_tests/test_version.py
  function test_mms_version (line 17) | def test_mms_version():

FILE: mms/tests/unit_tests/test_worker_service.py
  class TestService (line 15) | class TestService:
    method service (line 27) | def service(self, mocker):
    method test_predict (line 33) | def test_predict(self, service, mocker):
    method test_with_nil_request (line 38) | def test_with_nil_request(self, service):
    method test_valid_req (line 42) | def test_valid_req(self, service):
  class TestEmitMetrics (line 50) | class TestEmitMetrics:
    method test_emit_metrics (line 52) | def test_emit_metrics(self, caplog):

FILE: mms/utils/mxnet/image.py
  function transform_shape (line 23) | def transform_shape(img_arr, dim_order='NCHW'):
  function read (line 47) | def read(buf, flag=1, to_rgb=True, out=None):
  function write (line 82) | def write(img_arr, flag=1, format='jpeg', dim_order='CHW'):  # pylint: d...
  function resize (line 120) | def resize(src, new_width, new_height, interp=2):
  function fixed_crop (line 161) | def fixed_crop(src, x0, y0, w, h, size=None, interp=2):
  function color_normalize (line 190) | def color_normalize(src, mean, std=None):

FILE: mms/utils/mxnet/ndarray.py
  function top_probability (line 18) | def top_probability(data, labels, top=5):
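
`top_probability` maps the top-k scores in a prediction vector onto their class labels. A pure-Python sketch of the same idea (the real helper operates on mxnet NDArrays, and the output key names here are assumptions):

```python
import heapq

def top_probability(data, labels, top=5):
    """Return the `top` highest-scoring labels with their probabilities."""
    best = heapq.nlargest(top, enumerate(data), key=lambda iv: iv[1])
    return [{"probability": float(v), "class": labels[i]} for i, v in best]

scores = [0.1, 0.7, 0.2]
print(top_probability(scores, ["cat", "dog", "fish"], top=2))
# [{'probability': 0.7, 'class': 'dog'}, {'probability': 0.2, 'class': 'fish'}]
```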

FILE: mms/utils/mxnet/nlp.py
  function encode_sentences (line 19) | def encode_sentences(sentences, vocab=None, invalid_label=-1, invalid_ke...
  function pad_sentence (line 71) | def pad_sentence(sentence, buckets, invalid_label=-1, data_name='data', ...
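
`pad_sentence` pads an encoded sentence out to a bucket length, the standard trick for batching variable-length sequences with bucketing RNNs. A self-contained sketch of that logic, assuming the smallest fitting bucket is chosen and `-1` marks padding (mirroring the `invalid_label=-1` default above):

```python
def pad_sentence(sentence, buckets, invalid_label=-1):
    """Pad to the smallest bucket >= len(sentence); None if nothing fits."""
    for bucket in sorted(buckets):
        if len(sentence) <= bucket:
            return sentence + [invalid_label] * (bucket - len(sentence))
    return None  # sentence longer than the largest bucket

encoded = [4, 8, 15]
print(pad_sentence(encoded, buckets=[2, 5, 10]))  # [4, 8, 15, -1, -1]
```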

FILE: mms/utils/timeit_decorator.py
  function timeit (line 18) | def timeit(func):
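
A `timeit` decorator of this kind wraps a callable, measures wall-clock duration, and logs it. A minimal sketch of the pattern; the log format is illustrative:

```python
import logging
import time
from functools import wraps

def timeit(func):
    """Log how long the wrapped callable took, preserving its metadata."""
    @wraps(func)  # keeps __name__/__doc__ of the original function
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        logging.info("%s took %.4f s", func.__name__, time.time() - start)
        return result
    return wrapper

@timeit
def slow_add(a, b):
    time.sleep(0.01)
    return a + b

print(slow_add(2, 3))  # 5
```

`functools.wraps` is worth the extra line: without it, every timed function would report its name as `wrapper` in logs and tracebacks.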

FILE: model-archiver/model_archiver/arg_parser.py
  class ArgParser (line 22) | class ArgParser(object):
    method export_model_args_parser (line 30) | def export_model_args_parser():

FILE: model-archiver/model_archiver/manifest_components/engine.py
  class EngineType (line 16) | class EngineType(Enum):
  class Engine (line 22) | class Engine(object):
    method __init__ (line 27) | def __init__(self, engine_name, engine_version=None):
    method __to_dict__ (line 33) | def __to_dict__(self):
    method __str__ (line 42) | def __str__(self):
    method __repr__ (line 45) | def __repr__(self):

FILE: model-archiver/model_archiver/manifest_components/manifest.py
  class RuntimeType (line 17) | class RuntimeType(Enum):
  class Manifest (line 26) | class Manifest(object):
    method __init__ (line 31) | def __init__(self, runtime, model, engine=None, specification_version=...
    method __to_dict__ (line 46) | def __to_dict__(self):
    method __str__ (line 79) | def __str__(self):
    method __repr__ (line 82) | def __repr__(self):

FILE: model-archiver/model_archiver/manifest_components/model.py
  class Model (line 15) | class Model(object):
    method __init__ (line 21) | def __init__(self, model_name, handler, description=None, model_versio...
    method __to_dict__ (line 29) | def __to_dict__(self):
    method __str__ (line 47) | def __str__(self):
    method __repr__ (line 50) | def __repr__(self):

FILE: model-archiver/model_archiver/manifest_components/publisher.py
  class Publisher (line 15) | class Publisher(object):
    method __init__ (line 20) | def __init__(self, author, email):
    method __to_dict__ (line 25) | def __to_dict__(self):
    method __str__ (line 32) | def __str__(self):
    method __repr__ (line 35) | def __repr__(self):

FILE: model-archiver/model_archiver/model_archiver_error.py
  class ModelArchiverError (line 15) | class ModelArchiverError(Exception):
    method __init__ (line 19) | def __init__(self, message):

FILE: model-archiver/model_archiver/model_packaging.py
  function package_model (line 22) | def package_model(args, manifest):
  function generate_model_archive (line 55) | def generate_model_archive():

FILE: model-archiver/model_archiver/model_packaging_utils.py
  class ModelExportUtils (line 41) | class ModelExportUtils(object):
    method get_archive_export_path (line 48) | def get_archive_export_path(export_file_path, model_name, archive_form...
    method check_mar_already_exists (line 52) | def check_mar_already_exists(model_name, export_file_path, overwrite, ...
    method check_custom_model_types (line 78) | def check_custom_model_types(model_path, model_name=None):
    method find_unique (line 103) | def find_unique(files, suffix):
    method convert_onnx_model (line 122) | def convert_onnx_model(model_path, onnx_file, model_name):
    method generate_publisher (line 192) | def generate_publisher(publisherargs):
    method generate_engine (line 197) | def generate_engine(engineargs):
    method generate_model (line 202) | def generate_model(modelargs):
    method generate_manifest_json (line 207) | def generate_manifest_json(args):
    method clean_temp_files (line 226) | def clean_temp_files(temp_files):
    method make_dir (line 231) | def make_dir(d):
    method archive (line 236) | def archive(export_file, model_name, model_path, files_to_exclude, man...
    method archive_dir (line 283) | def archive_dir(path, dst, files_to_exclude, archive_format, model_name):
    method directory_filter (line 313) | def directory_filter(directory, unwanted_dirs):
    method file_filter (line 328) | def file_filter(current_file, files_to_exclude):
    method check_model_name_regex_or_exit (line 345) | def check_model_name_regex_or_exit(model_name):
    method validate_inputs (line 358) | def validate_inputs(model_path, model_name, export_path):
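
`find_unique` above locates the single model file with a given suffix (e.g. the symbol file), which must be unambiguous for packaging to proceed. A self-contained sketch of that check; the exact error behavior of the real utility (which exits the process) is simplified here to an exception:

```python
def find_unique(files, suffix):
    """Return the single file ending in `suffix`; None if absent, error if ambiguous."""
    matches = [f for f in files if f.endswith(suffix)]
    if not matches:
        return None
    if len(matches) == 1:
        return matches[0]
    raise ValueError("more than one file with suffix %r found" % suffix)

files = ["model-symbol.json", "model-0000.params", "signature.json"]
print(find_unique(files, "-symbol.json"))  # model-symbol.json
```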

FILE: model-archiver/model_archiver/tests/integ_tests/resources/onnx_model/service.py
  function handle (line 6) | def handle():

FILE: model-archiver/model_archiver/tests/integ_tests/resources/regular_model/service.py
  function handle (line 6) | def handle():

FILE: model-archiver/model_archiver/tests/integ_tests/test_integration_model_archiver.py
  function update_tests (line 16) | def update_tests(test):
  function create_file_path (line 26) | def create_file_path(path):
  function delete_file_path (line 36) | def delete_file_path(path):
  function run_test (line 46) | def run_test(test, cmd):
  function validate_archive_exists (line 59) | def validate_archive_exists(test):
  function validate_manifest_file (line 69) | def validate_manifest_file(manifest, test):
  function validate_files (line 81) | def validate_files(file_list, prefix, regular):
  function validate_tar_archive (line 92) | def validate_tar_archive(test_cfg):
  function validate_noarchive_archive (line 101) | def validate_noarchive_archive(test):
  function validate_mar_archive (line 107) | def validate_mar_archive(test):
  function validate_archive_content (line 115) | def validate_archive_content(test):
  function validate (line 125) | def validate(test):
  function test_model_archiver (line 130) | def test_model_archiver():

FILE: model-archiver/model_archiver/tests/unit_tests/test_model_packaging.py
  class TestModelPackaging (line 22) | class TestModelPackaging:
    class Namespace (line 24) | class Namespace:
      method __init__ (line 25) | def __init__(self, **kwargs):
      method update (line 28) | def update(self, **kwargs):
    method patches (line 44) | def patches(self, mocker):
    method test_gen_model_archive (line 52) | def test_gen_model_archive(self, patches):
    method test_export_model_method (line 57) | def test_export_model_method(self, patches):
    method test_export_model_method_tar (line 67) | def test_export_model_method_tar(self, patches):
    method test_export_model_method_noarchive (line 78) | def test_export_model_method_noarchive(self, patches):

FILE: model-archiver/model_archiver/tests/unit_tests/test_model_packaging_utils.py
  class TestExportModelUtils (line 22) | class TestExportModelUtils:
    class TestMarExistence (line 25) | class TestMarExistence:
      method patches (line 28) | def patches(self, mocker):
      method test_export_file_is_none (line 36) | def test_export_file_is_none(self, patches):
      method test_export_file_is_not_none (line 43) | def test_export_file_is_not_none(self, patches):
      method test_export_file_already_exists_with_override (line 49) | def test_export_file_already_exists_with_override(self, patches):
      method test_export_file_already_exists_with_override_false (line 56) | def test_export_file_already_exists_with_override_false(self, patches):
      method test_export_file_is_none_tar (line 64) | def test_export_file_is_none_tar(self, patches):
      method test_export_file_is_none_tar (line 71) | def test_export_file_is_none_tar(self, patches):
    class TestArchiveTypes (line 79) | class TestArchiveTypes:
      method test_archive_types (line 80) | def test_archive_types(self):
    class TestCustomModelTypes (line 88) | class TestCustomModelTypes:
      method patches (line 93) | def patches(self, mocker):
      method test_onnx_file_is_none (line 101) | def test_onnx_file_is_none(self, patches):
      method test_onnx_file_is_not_none (line 108) | def test_onnx_file_is_not_none(self, patches):
    class TestFindUnique (line 123) | class TestFindUnique:
      method test_with_count_zero (line 125) | def test_with_count_zero(self):
      method test_with_count_one (line 131) | def test_with_count_one(self):
      method test_with_exit (line 137) | def test_with_exit(self):
    class TestCleanTempFiles (line 144) | class TestCleanTempFiles:
      method patches (line 147) | def patches(self, mocker):
      method test_clean_call (line 154) | def test_clean_call(self, patches):
    class TestGenerateManifestProps (line 162) | class TestGenerateManifestProps:
      class Namespace (line 164) | class Namespace:
        method __init__ (line 165) | def __init__(self, **kwargs):
      method test_publisher (line 177) | def test_publisher(self):
      method test_engine (line 182) | def test_engine(self):
      method test_model (line 186) | def test_model(self):
      method test_manifest_json (line 191) | def test_manifest_json(self):
    class TestModelNameRegEx (line 201) | class TestModelNameRegEx:
      method test_regex_pass (line 203) | def test_regex_pass(self):
      method test_regex_fail (line 208) | def test_regex_fail(self):
    class TestFileFilter (line 216) | class TestFileFilter:
      method test_with_return_false (line 220) | def test_with_return_false(self):
      method test_with_pyc (line 223) | def test_with_pyc(self):
      method test_with_ds_store (line 226) | def test_with_ds_store(self):
      method test_with_return_true (line 229) | def test_with_return_true(self):
    class TestDirectoryFilter (line 233) | class TestDirectoryFilter:
      method test_with_unwanted_dirs (line 237) | def test_with_unwanted_dirs(self):
      method test_with_starts_with_dot (line 240) | def test_with_starts_with_dot(self):
      method test_with_return_true (line 243) | def test_with_return_true(self):

FILE: model-archiver/model_archiver/tests/unit_tests/test_version.py
  function test_model_export_tool_version (line 15) | def test_model_export_tool_version():

FILE: model-archiver/setup.py
  function pypi_description (line 40) | def pypi_description():
  function detect_model_archiver_version (line 46) | def detect_model_archiver_version():

FILE: plugins/endpoints/src/main/java/software/amazon/ai/mms/plugins/endpoint/ExecutionParameters.java
  class ExecutionParameters (line 15) | @Endpoint(
    method doGet (line 21) | @Override
    class ExecutionParametersResponse (line 40) | public static class ExecutionParametersResponse {
      method ExecutionParametersResponse (line 50) | public ExecutionParametersResponse() {
      method getMaxConcurrentTransforms (line 56) | public int getMaxConcurrentTransforms() {
      method getBatchStrategy (line 60) | public String getBatchStrategy() {
      method getMaxPayloadInMB (line 64) | public int getMaxPayloadInMB() {
      method setMaxConcurrentTransforms (line 68) | public void setMaxConcurrentTransforms(int newMaxConcurrentTransform...
      method setBatchStrategy (line 72) | public void setBatchStrategy(String newBatchStrategy) {
      method setMaxPayloadInMB (line 76) | public void setMaxPayloadInMB(int newMaxPayloadInMB) {

FILE: plugins/endpoints/src/main/java/software/amazon/ai/mms/plugins/endpoint/Ping.java
  class Ping (line 16) | @Endpoint(
    method modelsLoaded (line 24) | private boolean modelsLoaded(Context ctx) {
    method validConfig (line 37) | private boolean validConfig(String svc) {
    method doGet (line 48) | @Override
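Judging by the listing, the Ping plugin's `modelsLoaded` check verifies that every registered model is backed by at least one running worker before reporting the server healthy. A hedged Python sketch of that readiness logic (the dict/list shapes here are assumptions for illustration; the real SDK exposes the Java `Context`, `Model`, and `Worker` interfaces instead):

```python
def models_loaded(models):
    """Return True only if every model has at least one running worker.

    `models` is assumed to map model name -> list of worker dicts with a
    boolean "running" flag (a stand-in for the SDK's Worker.isRunning()).
    """
    if not models:
        return False  # assumption: a server with no models is not "ready"
    return all(
        any(worker["running"] for worker in workers)
        for workers in models.values()
    )
```

Note the explicit empty-map check: `all()` over an empty iterable is vacuously `True`, which would otherwise report an empty server as ready.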

FILE: run_circleci_tests.py
  function get_processed_job_sequence (line 71) | def get_processed_job_sequence(processed_job_name):
  function get_jobs_to_exec (line 86) | def get_jobs_to_exec(job_name):
  function get_jobs_steps (line 115) | def get_jobs_steps(steps, job_name):

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/Context.java
  type Context (line 21) | public interface Context {
    method getConfig (line 26) | Properties getConfig();
    method getModels (line 32) | Map<String, Model> getModels();

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/Model.java
  type Model (line 20) | public interface Model {
    method getModelName (line 25) | String getModelName();
    method getModelUrl (line 31) | String getModelUrl();
    method getModelHandler (line 37) | String getModelHandler();
    method getModelWorkers (line 43) | List<Worker> getModelWorkers();

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/ModelServerEndpoint.java
  class ModelServerEndpoint (line 23) | public abstract class ModelServerEndpoint {
    method doGet (line 31) | public void doGet(Request req, Response res, Context ctx) throws Model...
    method doPut (line 42) | public void doPut(Request req, Response res, Context ctx) throws Model...
    method doPost (line 53) | public void doPost(Request req, Response res, Context ctx) throws Mode...
    method doDelete (line 64) | public void doDelete(Request req, Response res, Context ctx) throws Mo...

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/ModelServerEndpointException.java
  class ModelServerEndpointException (line 15) | public class ModelServerEndpointException extends RuntimeException {
    method ModelServerEndpointException (line 16) | public ModelServerEndpointException(String err) {super(err);}
    method ModelServerEndpointException (line 17) | public ModelServerEndpointException(String err, Throwable t) {super(er...

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/Worker.java
  type Worker (line 14) | public interface Worker {
    method isRunning (line 19) | boolean isRunning();
    method getWorkerMemory (line 25) | long getWorkerMemory();

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/annotations/helpers/EndpointTypes.java
  type EndpointTypes (line 19) | public enum EndpointTypes {

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/http/Request.java
  type Request (line 24) | public interface Request {
    method getHeaderNames (line 29) | List<String> getHeaderNames();
    method getRequestURI (line 35) | String getRequestURI();
    method getParameterMap (line 41) | Map<String, List<String>> getParameterMap();
    method getParameter (line 48) | List<String> getParameter(String k);
    method getContentType (line 54) | String getContentType();
    method getInputStream (line 61) | InputStream getInputStream() throws IOException;

FILE: serving-sdk/src/main/java/software/amazon/ai/mms/servingsdk/http/Response.java
  type Response (line 22) | public interface Response {
    method setStatus (line 27) | void setStatus(int sc);
    method setStatus (line 34) | void setStatus(int sc, String phrase);
    method setHeader (line 41) | void setHeader(String k, String v);
    method addHeader (line 48) | void addHeader(String k, String v);
    method setContentType (line 54) | void setContentType(String ct);
    method getOutputStream (line 61) | OutputStream getOutputStream() throws IOException;

FILE: serving-sdk/src/test/java/software/amazon/ai/mms/servingsdk/ModelServerEndpointTest.java
  class ModelServerEndpointTest (line 38) | public class ModelServerEndpointTest {
    method beforeSuite (line 48) | @Before
    method test (line 81) | @Test
    method testEndpointInterface (line 88) | private void testEndpointInterface() throws IOException {
    method testEndpointAnnotation (line 162) | private void testEndpointAnnotation() {
    method testWorkerInterface (line 170) | private void testWorkerInterface(Worker w) {
    method testModelInterface (line 177) | private void testModelInterface(Model m) {
    method testContextInterface (line 186) | private void testContextInterface() {

FILE: setup.py
  function pypi_description (line 45) | def pypi_description():
  function detect_model_server_version (line 53) | def detect_model_server_version():
  class BuildFrontEnd (line 62) | class BuildFrontEnd(setuptools.command.build_py.build_py):
    method run (line 71) | def run(self):
  class BuildPy (line 99) | class BuildPy(setuptools.command.build_py.build_py):
    method run (line 104) | def run(self):
  class BuildPlugins (line 110) | class BuildPlugins(Command):
    method initialize_options (line 116) | def initialize_options(self):
    method finalize_options (line 119) | def finalize_options(self):
    method run (line 125) | def run(self):

FILE: tests/performance/agents/configuration.py
  function get (line 26) | def get(section, key, default=''):

FILE: tests/performance/agents/metrics/__init__.py
  class ProcessType (line 21) | class ProcessType(Enum):
  function get_metrics (line 90) | def get_metrics(server_process, child_processes, logger):

FILE: tests/performance/agents/metrics_collector.py
  function store_pid (line 43) | def store_pid(pid_file):
  function stop_process (line 51) | def stop_process(pid_file):
  function check_is_running (line 71) | def check_is_running(pid_file):
  function store_metrics_collector_pid (line 86) | def store_metrics_collector_pid():
  function stop_metrics_collector_process (line 91) | def stop_metrics_collector_process():
  function monitor_processes (line 98) | def monitor_processes(server_process, metrics, interval, socket):
  function start_metric_collection (line 124) | def start_metric_collection(server_process, metrics, interval, socket):
  function start_metric_collector_process (line 134) | def start_metric_collector_process():
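The `store_pid` / `check_is_running` / `stop_process` helpers in `metrics_collector.py` follow the classic pid-file pattern for managing a background agent. A minimal stdlib sketch of that pattern; the function names mirror the listing, but the bodies are assumptions rather than the module's actual implementation:

```python
import os
import signal


def store_pid(pid_file):
    """Record the current process id so a later invocation can find it."""
    with open(pid_file, "w") as f:
        f.write(str(os.getpid()))


def check_is_running(pid_file):
    """True if the pid file exists and names a live process."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False
    try:
        os.kill(pid, 0)  # signal 0 only probes for existence
        return True
    except OSError:
        return False


def stop_process(pid_file):
    """Send SIGTERM to the recorded pid, then remove the pid file."""
    if check_is_running(pid_file):
        with open(pid_file) as f:
            os.kill(int(f.read().strip()), signal.SIGTERM)
    if os.path.exists(pid_file):
        os.remove(pid_file)
```

The `os.kill(pid, 0)` probe is the standard POSIX liveness check: it delivers no signal, only raising `OSError` if the process does not exist (or is not signalable).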

FILE: tests/performance/agents/metrics_monitoring_inproc.py
  class Monitor (line 38) | class Monitor(monitoring.Monitoring):
    method __init__ (line 42) | def __init__(self):
  class ServerLocalClient (line 47) | class ServerLocalClient(monitoring.LocalClient):
    method __init__ (line 53) | def __init__(self, parent_log, label, config, engine=None):
    method connect (line 61) | def connect(self):
  class ServerLocalMonitor (line 89) | class ServerLocalMonitor(monitoring.LocalMonitor):
    method _calc_resource_stats (line 92) | def _calc_resource_stats(self, interval):

FILE: tests/performance/agents/metrics_monitoring_server.py
  function process_data (line 51) | def process_data(sock):
  function perf_server (line 79) | def perf_server():
  function send_message (line 109) | def send_message(socket_, message):
  function close_socket (line 117) | def close_socket(socket_):

FILE: tests/performance/agents/utils/process.py
  function find_procs_by_name (line 24) | def find_procs_by_name(name):
  function get_process_pid_from_file (line 39) | def get_process_pid_from_file(file_path):
  function get_child_processes (line 50) | def get_child_processes(process):
  function get_server_processes (line 58) | def get_server_processes(server_process_pid):
  function get_server_pidfile (line 70) | def get_server_pidfile(file):

FILE: tests/performance/run_performance_suite.py
  function get_artifacts_dir (line 41) | def get_artifacts_dir(ctx, param, value):
  function validate_env (line 52) | def validate_env(ctx, param, value):
  function run_test_suite (line 74) | def run_test_suite(artifacts_dir, test_dir, pattern, exclude_pattern,

FILE: tests/performance/runs/compare.py
  class CompareReportGenerator (line 36) | class CompareReportGenerator():
    method __init__ (line 38) | def __init__(self, path, env_name, local_run):
    method gen (line 48) | def gen(self):
  class CompareTestSuite (line 64) | class CompareTestSuite():
    method __init__ (line 74) | def __init__(self, name, hostname, t):
    method add_test_case (line 80) | def add_test_case(self, name, msg, type):
  function get_log_file (line 88) | def get_log_file(dir, sub_dir):
  function get_aggregate_val (line 94) | def get_aggregate_val(df, agg_func, col):
  function compare_values (line 105) | def compare_values(val1, val2, diff_percent, run_name1, run_name2):
  function compare_artifacts (line 135) | def compare_artifacts(dir1, dir2, run_name1, run_name2):
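`compare_values` in `runs/compare.py` appears to flag a regression when two aggregated metric values differ by more than a given percentage between runs. A sketch of that tolerance check (treating the first value as the baseline for the percentage is an assumption):

```python
def within_diff_percent(val1, val2, diff_percent):
    """True if val2 deviates from the baseline val1 by <= diff_percent.

    A zero baseline only matches a zero candidate, since a percentage
    of zero is undefined.
    """
    if val1 == 0:
        return val2 == 0
    return abs(val2 - val1) / abs(val1) * 100.0 <= diff_percent
```

A comparator like this would be applied per metric column after `get_aggregate_val` reduces each run's dataframe with the configured aggregation function.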

FILE: tests/performance/runs/context.py
  class ExecutionEnv (line 35) | class ExecutionEnv(object):
    method __init__ (line 40) | def __init__(self, agent, artifacts_dir, env, local_run, use=True, che...
    method __enter__ (line 50) | def __enter__(self):
    method open_report (line 58) | def open_report(file_path):
    method report_summary (line 64) | def report_summary(reporter, suite_name):
    method __exit__ (line 82) | def __exit__(self, type, value, traceback):

FILE: tests/performance/runs/junit.py
  class JunitConverter (line 28) | class JunitConverter():
    method __init__ (line 30) | def __init__(self, junit_xml, out_dir, report_name):
    method generate_junit_report (line 35) | def generate_junit_report(self):
  function pretty_text (line 42) | def pretty_text(data):
  function junit2array (line 50) | def junit2array(junit_xml):
  function junit2tabulate (line 66) | def junit2tabulate(junit_xml):

FILE: tests/performance/runs/storage.py
  class Storage (line 35) | class Storage():
    method __init__ (line 38) | def __init__(self, path, env_name):
    method get_dir_to_compare (line 43) | def get_dir_to_compare(self):
    method store_results (line 46) | def store_results(self):
    method get_latest (line 50) | def get_latest(names, env_name, exclude_name):
  class LocalStorage (line 70) | class LocalStorage(Storage):
    method get_dir_to_compare (line 75) | def get_dir_to_compare(self):
  class S3Storage (line 83) | class S3Storage(Storage):
    method get_dir_to_compare (line 86) | def get_dir_to_compare(self):
    method store_results (line 112) | def store_results(self):

FILE: tests/performance/runs/taurus/__init__.py
  function get_taurus_options (line 25) | def get_taurus_options(artifacts_dir, jmeter_path=None):
  function update_taurus_metric_files (line 38) | def update_taurus_metric_files(suite_artifacts_dir, test_file):

FILE: tests/performance/runs/taurus/reader.py
  function get_mon_metrics_list (line 23) | def get_mon_metrics_list(test_yaml_path):
  function get_compare_metric_list (line 37) | def get_compare_metric_list(dir, sub_dir):

FILE: tests/performance/runs/taurus/x2junit.py
  class X2Junit (line 24) | class X2Junit(object):
    method __init__ (line 29) | def __init__(self, name, artifacts_dir, junit_xml, timer, env_name):
    method __enter__ (line 37) | def __enter__(self):
    method __exit__ (line 40) | def __exit__(self, type, value, traceback):

FILE: tests/performance/utils/fs.py
  function get_sub_dirs (line 26) | def get_sub_dirs(dir, exclude_list=['comp_data'], include_pattern='*', e...
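`get_sub_dirs` filters a directory's immediate subdirectories through an exclude list plus glob-style include/exclude patterns. A stdlib sketch of that behavior (the precedence of `exclude_pattern` over `include_pattern` in the real helper is an assumption):

```python
import fnmatch
import os


def get_sub_dirs(path, exclude_list=("comp_data",), include_pattern="*",
                 exclude_pattern=None):
    """List immediate subdirectory names of `path` matching the filters."""
    result = []
    for name in sorted(os.listdir(path)):
        if not os.path.isdir(os.path.join(path, name)):
            continue  # skip plain files
        if name in exclude_list:
            continue
        if not fnmatch.fnmatch(name, include_pattern):
            continue
        if exclude_pattern and fnmatch.fnmatch(name, exclude_pattern):
            continue
        result.append(name)
    return result
```

`fnmatch` gives shell-style `*`/`?` matching without pulling in a regex, which fits the `include_pattern='*'` default visible in the listing.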

FILE: tests/performance/utils/pyshell.py
  function run_process (line 27) | def run_process(cmd, wait=True):

FILE: tests/performance/utils/timer.py
  class Timer (line 26) | class Timer(object):
    method __init__ (line 30) | def __init__(self, description):
    method __enter__ (line 33) | def __enter__(self):
    method __exit__ (line 37) | def __exit__(self, type, value, traceback):
    method diff (line 40) | def diff(self):
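The `Timer` class in `utils/timer.py` is a context manager that exposes the elapsed wall-clock time of a block via `diff()`. A minimal sketch with the same method shape as the listing (the internals are assumptions):

```python
import time


class Timer(object):
    """Context manager that measures wall-clock time for a block."""

    def __init__(self, description):
        self.description = description
        self.start = None
        self.end = None

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.end = time.time()

    def diff(self):
        """Seconds elapsed; usable both inside and after the block."""
        end = self.end if self.end is not None else time.time()
        return end - self.start
```

Typical usage: `with Timer("suite") as t: run_suite(); elapsed = t.diff()`, which is how a timer like this would wrap each test suite in `run_performance_suite.py`.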
Condensed preview: 466 files, each showing path, character count, and a content snippet (2,023K chars of structured content in total).
[
  {
    "path": ".circleci/README.md",
    "chars": 3212,
    "preview": "# Multi Model Server CircleCI build\nModel Server uses CircleCI for builds. This folder contains the config and scripts t"
  },
  {
    "path": ".circleci/config.yml",
    "chars": 4916,
    "preview": "version: 2.1\n\n\nexecutors:\n  py36:\n    docker:\n      - image: prashantsail/mms-build:python3.6\n    environment:\n      _JA"
  },
  {
    "path": ".circleci/images/Dockerfile.python2.7",
    "chars": 2803,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": ".circleci/images/Dockerfile.python3.6",
    "chars": 2298,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": ".circleci/scripts/linux_build.sh",
    "chars": 52,
    "preview": "#!/bin/bash\n\npython setup.py bdist_wheel --universal"
  },
  {
    "path": ".circleci/scripts/linux_test_api.sh",
    "chars": 2244,
    "preview": "#!/bin/bash\n\nMODEL_STORE_DIR='test/model_store'\n\nMMS_LOG_FILE_MANAGEMENT='mms_management.log'\nMMS_LOG_FILE_INFERENCE='mm"
  },
  {
    "path": ".circleci/scripts/linux_test_benchmark.sh",
    "chars": 429,
    "preview": "#!/bin/bash\n\n# Hack needed to make it work with existing benchmark.py\n# benchmark.py expects jmeter to be present at a v"
  },
  {
    "path": ".circleci/scripts/linux_test_modelarchiver.sh",
    "chars": 399,
    "preview": "#!/bin/bash\n\ncd model-archiver/\n\n# Lint test\npylint -rn --rcfile=./model_archiver/tests/pylintrc model_archiver/.\n\n# Exe"
  },
  {
    "path": ".circleci/scripts/linux_test_perf_regression.sh",
    "chars": 874,
    "preview": "#!/bin/bash\n\nmulti-model-server --start \\\n                   --models squeezenet=https://s3.amazonaws.com/model-server/m"
  },
  {
    "path": ".circleci/scripts/linux_test_python.sh",
    "chars": 171,
    "preview": "#!/bin/bash\n\n# Lint Test\npylint -rn --rcfile=./mms/tests/pylintrc mms/.\n\n# Execute python tests\npython -m pytest --cov-r"
  },
  {
    "path": ".coveragerc",
    "chars": 976,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "chars": 693,
    "preview": "Before or while filing an issue please feel free to join our [<img src='../docs/images/slack.png' width='20px' /> slack "
  },
  {
    "path": ".gitignore",
    "chars": 1377,
    "preview": ".gradle\n\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution "
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "LICENSE.txt",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "MANIFEST.in",
    "chars": 88,
    "preview": "include mms/frontend/model-server.jar\ninclude PyPiDescription.rst\ninclude mms/configs/*\n"
  },
  {
    "path": "PyPiDescription.rst",
    "chars": 2777,
    "preview": "Project Description\n===================\n\nMulti Model Server (MMS) is a flexible and easy to use tool for\nserving deep le"
  },
  {
    "path": "README.md",
    "chars": 10439,
    "preview": "Multi Model Server\n=======\n\n| ubuntu/python-2.7 | ubuntu/python-3.6 |\n|---------|---------|\n| ![Python3 Build Status](ht"
  },
  {
    "path": "_config.yml",
    "chars": 26,
    "preview": "theme: jekyll-theme-cayman"
  },
  {
    "path": "benchmarks/README.md",
    "chars": 8089,
    "preview": "# Multi Model Server Benchmarking\n\nThe benchmarks measure the performance of MMS on various models and benchmarks.  It s"
  },
  {
    "path": "benchmarks/benchmark.py",
    "chars": 19582,
    "preview": "#!/usr/bin/env python3\n\n# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the A"
  },
  {
    "path": "benchmarks/install_dependencies.sh",
    "chars": 3848,
    "preview": "#!/bin/bash\n\n# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache Licen"
  },
  {
    "path": "benchmarks/jmx/concurrentLoadPlan.jmx",
    "chars": 6311,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<jmeterTestPlan version=\"1.2\" properties=\"4.0\" jmeter=\"4.0 r1823414\">\n  <hashTree"
  },
  {
    "path": "benchmarks/jmx/concurrentScaleCalls.jmx",
    "chars": 12687,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<jmeterTestPlan version=\"1.2\" properties=\"4.0\" jmeter=\"4.0 r1823414\">\n  <hashTree"
  },
  {
    "path": "benchmarks/jmx/graphsGenerator.jmx",
    "chars": 3500,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<jmeterTestPlan version=\"1.2\" properties=\"4.0\" jmeter=\"4.0 r1823414\">\n  <hashTree"
  },
  {
    "path": "benchmarks/jmx/imageInputModelPlan.jmx",
    "chars": 12562,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<jmeterTestPlan version=\"1.2\" properties=\"4.0\" jmeter=\"4.0 r1823414\">\n  <hashTree"
  },
  {
    "path": "benchmarks/jmx/multipleModelsLoadPlan.jmx",
    "chars": 26683,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<jmeterTestPlan version=\"1.2\" properties=\"4.0\" jmeter=\"4.0 r1823414\">\n  <hashTree"
  },
  {
    "path": "benchmarks/jmx/pingPlan.jmx",
    "chars": 4191,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<jmeterTestPlan version=\"1.2\" properties=\"4.0\" jmeter=\"4.0 r1823414\">\n  <hashTree"
  },
  {
    "path": "benchmarks/jmx/textInputModelPlan.jmx",
    "chars": 12544,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<jmeterTestPlan version=\"1.2\" properties=\"4.0\" jmeter=\"4.0 r1823414\">\n  <hashTree"
  },
  {
    "path": "benchmarks/lstm_ip.json",
    "chars": 123,
    "preview": "[{\"input_sentence\": \"on the exchange floor as soon as ual stopped trading we <unk> for a panic said one top floor trader"
  },
  {
    "path": "benchmarks/mac_install_dependencies.sh",
    "chars": 1528,
    "preview": "#!/bin/bash\n\n# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache Licen"
  },
  {
    "path": "benchmarks/noop_ip.txt",
    "chars": 41,
    "preview": "\"[{\\\"input_sentence\\\": \\\"Hello World\\\"}]\""
  },
  {
    "path": "benchmarks/upload_results_to_s3.sh",
    "chars": 1041,
    "preview": "#!/usr/bin/env bash\n\n# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apac"
  },
  {
    "path": "ci/Dockerfile.python2.7",
    "chars": 8927,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "ci/Dockerfile.python3.6",
    "chars": 9399,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "ci/README.md",
    "chars": 1294,
    "preview": "# Model Server CI build\n\nModel Server us AWS codebuild for its CI build. This folder contains scripts that needed for AW"
  },
  {
    "path": "ci/buildspec.yml",
    "chars": 1392,
    "preview": "# Build Spec for AWS CodeBuild CI\n\nversion: 0.2\n\nphases:\n  install:\n    commands:\n      - apt-get update\n      - apt-get"
  },
  {
    "path": "ci/dockerd-entrypoint.sh",
    "chars": 438,
    "preview": "#!/bin/sh\nset -e\n\n/usr/local/bin/dockerd \\\n\t--host=unix:///var/run/docker.sock \\\n\t--host=tcp://127.0.0.1:2375 \\\n\t--stora"
  },
  {
    "path": "ci/m2-settings.xml",
    "chars": 812,
    "preview": "<settings>\n  <profiles>\n    <profile>\n      <id>securecentral</id>\n      <activation>\n        <activeByDefault>true</act"
  },
  {
    "path": "docker/Dockerfile.cpu",
    "chars": 1205,
    "preview": "FROM ubuntu:18.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND=noninteractive apt-get install"
  },
  {
    "path": "docker/Dockerfile.gpu",
    "chars": 1239,
    "preview": "FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu18.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND="
  },
  {
    "path": "docker/Dockerfile.nightly-cpu",
    "chars": 1204,
    "preview": "FROM ubuntu:18.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND=noninteractive apt-get install"
  },
  {
    "path": "docker/Dockerfile.nightly-gpu",
    "chars": 1238,
    "preview": "FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu18.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND="
  },
  {
    "path": "docker/README.md",
    "chars": 14548,
    "preview": "[//]: # \"All the references in this file should be actual links because this file would be used by docker hub. DO NOT us"
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.nvidia_cu92_ubuntu_16_04.py2_7",
    "chars": 1002,
    "preview": "FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu16.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND="
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.nvidia_cu92_ubuntu_16_04.py2_7.nightly",
    "chars": 1008,
    "preview": "FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu16.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND="
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.nvidia_cu92_ubuntu_16_04.py3_6",
    "chars": 3719,
    "preview": "FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu16.04\n\nENV PYTHONUNBUFFERED TRUE\n\n# Install python3.6 and pip3\nENV PATH /usr/l"
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.nvidia_cu92_ubuntu_16_04.py3_6.nightly",
    "chars": 3725,
    "preview": "FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu16.04\n\nENV PYTHONUNBUFFERED TRUE\n\n# Install python3.6 and pip3\nENV PATH /usr/l"
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.ubuntu_16_04.py2_7",
    "chars": 972,
    "preview": "FROM ubuntu:16.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND=noninteractive apt-get install"
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.ubuntu_16_04.py2_7.nightly",
    "chars": 978,
    "preview": "FROM ubuntu:16.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN apt-get update && \\\n    DEBIAN_FRONTEND=noninteractive apt-get install"
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.ubuntu_16_04.py3_6",
    "chars": 3689,
    "preview": "FROM ubuntu:16.04\n\nENV PYTHONUNBUFFERED TRUE\n\n# Install python3.6 and pip3\nENV PATH /usr/local/bin:$PATH\nENV LANG C.UTF-"
  },
  {
    "path": "docker/advanced-dockerfiles/Dockerfile.base.ubuntu_16_04.py3_6.nightly",
    "chars": 3695,
    "preview": "FROM ubuntu:16.04\n\nENV PYTHONUNBUFFERED TRUE\n\n# Install python3.6 and pip3\nENV PATH /usr/local/bin:$PATH\nENV LANG C.UTF-"
  },
  {
    "path": "docker/advanced-dockerfiles/config.properties",
    "chars": 889,
    "preview": "# vmargs=-Xmx128m -XX:-UseLargePages -XX:+UseG1GC -XX:MaxMetaspaceSize=32M -XX:MaxDirectMemorySize=10m -XX:+ExitOnOutOfM"
  },
  {
    "path": "docker/advanced-dockerfiles/dockerd-entrypoint.sh",
    "chars": 187,
    "preview": "#!/bin/bash\nset -e\n\nif [[ \"$1\" = \"serve\" ]]; then\n    shift 1\n    multi-model-server --start --mms-config config.propert"
  },
  {
    "path": "docker/advanced_settings.md",
    "chars": 11781,
    "preview": "# Advanced Settings\n\n## Contents of this Document\n* [GPU Inference](advanced_settings.md#gpu-inference)\n* [Reference Com"
  },
  {
    "path": "docker/config.properties",
    "chars": 908,
    "preview": "vmargs=-Xmx128m -XX:-UseLargePages -XX:+UseG1GC -XX:MaxMetaspaceSize=32M -XX:MaxDirectMemorySize=10m -XX:+ExitOnOutOfMem"
  },
  {
    "path": "docker/dockerd-entrypoint.sh",
    "chars": 187,
    "preview": "#!/bin/bash\nset -e\n\nif [[ \"$1\" = \"serve\" ]]; then\n    shift 1\n    multi-model-server --start --mms-config config.propert"
  },
  {
    "path": "docs/README.md",
    "chars": 2284,
    "preview": "# Multi Model Server Documentation\n\n## Basic Features\n* [Serving Quick Start](../README.md#serve-a-model) - Basic server"
  },
  {
    "path": "docs/batch_inference_with_mms.md",
    "chars": 18790,
    "preview": "# Batch Inference with Model Server\n\n## Contents of this Document\n* [Introduction](#introduction)\n* [Batching example](#"
  },
  {
    "path": "docs/configuration.md",
    "chars": 9884,
    "preview": "# Advanced configuration\n\nOne of design goal of MMS 1.0 is easy to use. The default settings form MMS should be sufficie"
  },
  {
    "path": "docs/custom_service.md",
    "chars": 6331,
    "preview": "# Custom Service\n\n## Contents of this Document\n* [Introduction](#introduction)\n* [Requirements for custom service file]("
  },
  {
    "path": "docs/elastic_inference.md",
    "chars": 7939,
    "preview": "# Model Serving with Amazon Elastic Inference \n\n## Contents of this Document\n* [Introduction](#introduction)\n* [Custom S"
  },
  {
    "path": "docs/images/helpers/plugins_sdk_class_uml_diagrams.puml",
    "chars": 1447,
    "preview": "\n@startuml\nContext \"1\" *-- \"many\" Model : contains\nModel \"1\" *-- \"many\" Worker : contains\nEndpoint o-- EndpointTypes\n\nMo"
  },
  {
    "path": "docs/inference_api.md",
    "chars": 2147,
    "preview": "# Inference API\n\nInference API is listening on port 8080 and only accessible from localhost by default. To change the de"
  },
  {
    "path": "docs/install.md",
    "chars": 2938,
    "preview": "\n# Install MMS\n\n## Prerequisites\n\n* **Python**: Required. Multi Model Server (MMS) works with Python 2 or 3.  When insta"
  },
  {
    "path": "docs/logging.md",
    "chars": 5293,
    "preview": "# Logging on Multi Model Server\n\nIn this document we will go through logging mechanism in Multi Model Server. We will al"
  },
  {
    "path": "docs/management_api.md",
    "chars": 7817,
    "preview": "# Management API\n\nMMS provides a set of API allow user to manage models at runtime:\n1. [Register a model](#register-a-mo"
  },
  {
    "path": "docs/metrics.md",
    "chars": 8020,
    "preview": "# Metrics on Model Server\n\n## Contents of this Document\n* [Introduction](#introduction)\n* [System metrics](#system-metri"
  },
  {
    "path": "docs/migration.md",
    "chars": 5028,
    "preview": "# Migration from MMS 0.4\n\nMMS 1.0 is a major release that contains significant architecture improvement based on MMS 0.4"
  },
  {
    "path": "docs/mms_endpoint_plugins.md",
    "chars": 6330,
    "preview": "# Introduction \nIn this document, we will go over how to build and load custom endpoints for MMS. We will go over the pl"
  },
  {
    "path": "docs/mms_on_fargate.md",
    "chars": 14186,
    "preview": "# Serverless Inference with MMS on FARGATE\n\nThis is self-contained step by step guide that shows how to create launch an"
  },
  {
    "path": "docs/model_zoo.md",
    "chars": 28404,
    "preview": "# Model Zoo\n\nThis page lists model archives that are pre-trained and pre-packaged, ready to be served for inference with"
  },
  {
    "path": "docs/rest_api.md",
    "chars": 645,
    "preview": "# MMS REST API\n\nMMS use RESTful API for both inference and management calls. The API is compliance with [OpenAPI specifi"
  },
  {
    "path": "docs/server.md",
    "chars": 9035,
    "preview": "# Running the Model Server\n\n## Contents of this Document\n* [Overview](#overview)\n* [Technical Details](#technical-detail"
  },
  {
    "path": "examples/README.md",
    "chars": 840,
    "preview": "# MMS Examples\n\nThe following are examples on how to create and serve model archives with MMS.\n\n* Gluon Models\n    * [Al"
  },
  {
    "path": "examples/densenet_pytorch/Dockerfile",
    "chars": 124,
    "preview": "FROM awsdeeplearningteam/multi-model-server:base-cpu-py3.6\n\n# Add PyTorch\nRUN pip install --no-cache-dir torch torchvisi"
  },
  {
    "path": "examples/densenet_pytorch/README.md",
    "chars": 1464,
    "preview": "# PyTorch serving  \nThis example shows how to serve PyTorch trained models for flower species recognition..\nThe custom h"
  },
  {
    "path": "examples/densenet_pytorch/densenet_service.py",
    "chars": 3636,
    "preview": "import os\nimport io\nimport json\nimport numpy as np\nfrom PIL import Image\nimport torch\nfrom torch.autograd import Variabl"
  },
  {
    "path": "examples/densenet_pytorch/index_to_name.json",
    "chars": 2218,
    "preview": "{\"21\": \"fire lily\", \"3\": \"canterbury bells\", \"45\": \"bolero deep blue\", \"1\": \"pink primrose\", \"34\": \"mexican aster\", \"27\""
  },
  {
    "path": "examples/gluon_alexnet/README.md",
    "chars": 15506,
    "preview": "# Loading and serving Gluon models on Multi Model Server (MMS)\nMulti Model Server (MMS) supports loading and serving MXN"
  },
  {
    "path": "examples/gluon_alexnet/gluon_hybrid_alexnet.py",
    "chars": 3328,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/gluon_alexnet/gluon_imperative_alexnet.py",
    "chars": 3410,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/gluon_alexnet/gluon_pretrained_alexnet.py",
    "chars": 1785,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/gluon_alexnet/signature.json",
    "chars": 256,
    "preview": "{\n  \"inputs\": [\n    {\n      \"data_name\": \"data\",\n      \"data_shape\": [0, 3, 224, 224]\n    }\n  ],\n  \"input_type\": \"image/"
  },
  {
    "path": "examples/gluon_alexnet/synset.txt",
    "chars": 31675,
    "preview": "n01440764 tench, Tinca tinca\nn01443537 goldfish, Carassius auratus\nn01484850 great white shark, white shark, man-eater, "
  },
  {
    "path": "examples/gluon_character_cnn/README.md",
    "chars": 7002,
    "preview": "# Character-level CNN Model in Gluon trained using Amazon Product Dataset\n\nIn this example, we show how to create a serv"
  },
  {
    "path": "examples/gluon_character_cnn/gluon_crepe.py",
    "chars": 5486,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/gluon_character_cnn/signature.json",
    "chars": 252,
    "preview": "{\n  \"inputs\": [\n    {\n      \"data_name\": \"data\",\n      \"data_shape\": [1,1014]\n    }\n  ],\n  \"input_type\": \"application/js"
  },
  {
    "path": "examples/gluon_character_cnn/synset.txt",
    "chars": 126,
    "preview": "Home_and_Kitchen\nBooks\nCDs_and_Vinyl\nMovies_and_TV\nCell_Phones_and_Accessories\nSports_and_Outdoors\nClothing_Shoes_and_Je"
  },
  {
    "path": "examples/lstm_ptb/README.md",
    "chars": 6418,
    "preview": "# Sequence to Sequence inference with LSTM network trained on PenTreeBank data set\n\nIn this example, we show how to crea"
  },
  {
    "path": "examples/lstm_ptb/lstm_ptb_service.py",
    "chars": 5387,
    "preview": "import json\nimport os\n\nimport mxnet as mx\n\nfrom mxnet_utils import nlp\nfrom model_handler import ModelHandler\n\n\nclass MX"
  },
  {
    "path": "examples/metrics_cloudwatch/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "examples/metrics_cloudwatch/metric_push_example.py",
    "chars": 2236,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/gluon_base_service.py",
    "chars": 4806,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/model_handler.py",
    "chars": 3314,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/mxnet_model_service.py",
    "chars": 7019,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/mxnet_utils/__init__.py",
    "chars": 590,
    "preview": "# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/mxnet_utils/image.py",
    "chars": 6467,
    "preview": "# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/mxnet_utils/ndarray.py",
    "chars": 1351,
    "preview": "# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/mxnet_utils/nlp.py",
    "chars": 3435,
    "preview": "# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/mxnet_vision_batching.py",
    "chars": 7606,
    "preview": "# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/model_service_template/mxnet_vision_service.py",
    "chars": 2943,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/mxnet_vision/README.md",
    "chars": 7711,
    "preview": "# MXNet Vision Service\n\nIn this example, we show how to use a pre-trained MXNet model to performing real time Image Clas"
  },
  {
    "path": "examples/sockeye_translate/Dockerfile",
    "chars": 1156,
    "preview": "FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu18.04\n\nENV PYTHONUNBUFFERED TRUE\n\nRUN useradd -m model-server && \\\n    mkdir -"
  },
  {
    "path": "examples/sockeye_translate/README.md",
    "chars": 2208,
    "preview": "# sockeye-serving\nThis example shows how to serve Sockeye models for machine translation.\nThe custom handler is implemen"
  },
  {
    "path": "examples/sockeye_translate/config/config.properties",
    "chars": 716,
    "preview": "vmargs=-Xmx128m -XX:-UseLargePages -XX:+UseG1GC -XX:MaxMetaspaceSize=32M -XX:MaxDirectMemorySize=10m -XX:+ExitOnOutOfMem"
  },
  {
    "path": "examples/sockeye_translate/model_handler.py",
    "chars": 2748,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "examples/sockeye_translate/preprocessor.py",
    "chars": 4944,
    "preview": "import html\nimport logging\nimport os\nimport subprocess\nfrom html.entities import html5, name2codepoint\n\nimport regex as "
  },
  {
    "path": "examples/sockeye_translate/sockeye_service.py",
    "chars": 10747,
    "preview": "import logging\nimport os\nimport re\nfrom contextlib import ExitStack\nfrom sockeye import arguments\nfrom sockeye import co"
  },
  {
    "path": "examples/ssd/README.md",
    "chars": 9792,
    "preview": "# Single Shot Multi Object Detection Inference Service\n\nIn this example, we show how to use a pre-trained Single Shot Mu"
  },
  {
    "path": "examples/ssd/example_outputs.md",
    "chars": 1937,
    "preview": "# SSD Example Outputs\n\n### Dog Beach\n\n![dog beach](https://farm9.staticflickr.com/8184/8081332083_3a5c242b8b_z_d.jpg)\n``"
  },
  {
    "path": "examples/ssd/ssd_service.py",
    "chars": 4357,
    "preview": "# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/.gitignore",
    "chars": 41,
    "preview": ".gradle\n.DS_Store\n.idea\n*.iml\nbuild\nlibs\n"
  },
  {
    "path": "frontend/README.md",
    "chars": 770,
    "preview": "Model Server REST API endpoint\n==============================\n\n## Quick Start\n\n### Building frontend\n\nYou can build fron"
  },
  {
    "path": "frontend/build.gradle",
    "chars": 1279,
    "preview": "allprojects {\n    version = '1.0'\n\n    repositories {\n        jcenter()\n    }\n\n    apply plugin: 'idea'\n    idea {\n     "
  },
  {
    "path": "frontend/cts/build.gradle",
    "chars": 416,
    "preview": "dependencies {\n    compile project(\":server\")\n}\n\njar {\n    manifest {\n        attributes 'Main-Class': 'com.amazonaws.ml"
  },
  {
    "path": "frontend/cts/src/main/java/com/amazonaws/ml/mms/cts/Cts.java",
    "chars": 6767,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/cts/src/main/java/com/amazonaws/ml/mms/cts/HttpClient.java",
    "chars": 8953,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/cts/src/main/java/com/amazonaws/ml/mms/cts/ModelInfo.java",
    "chars": 7532,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/cts/src/main/resources/log4j2.xml",
    "chars": 773,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Configuration>\n    <Appenders>\n        <Console name=\"STDOUT\" target=\"SYSTEM_OUT"
  },
  {
    "path": "frontend/gradle/wrapper/gradle-wrapper.properties",
    "chars": 230,
    "preview": "#Thu Apr 13 16:20:04 PDT 2017\ndistributionBase=GRADLE_USER_HOME\ndistributionPath=wrapper/dists\nzipStoreBase=GRADLE_USER_"
  },
  {
    "path": "frontend/gradle.properties",
    "chars": 256,
    "preview": "org.gradle.daemon=true\norg.gradle.jvmargs=-Xmx1024M\nnetty_version=4.1.109.Final\nslf4j_api_version=1.7.32\nslf4j_log4j_ver"
  },
  {
    "path": "frontend/gradlew",
    "chars": 5242,
    "preview": "#!/usr/bin/env bash\n\n##############################################################################\n##\n##  Gradle start "
  },
  {
    "path": "frontend/gradlew.bat",
    "chars": 2260,
    "preview": "@if \"%DEBUG%\" == \"\" @echo off\r\n@rem ##########################################################################\r\n@rem\r\n@r"
  },
  {
    "path": "frontend/modelarchive/build.gradle",
    "chars": 367,
    "preview": "dependencies {\n    compile \"commons-io:commons-io:2.6\"\n    compile \"org.slf4j:slf4j-api:${slf4j_api_version}\"\n    compil"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/DownloadModelException.java",
    "chars": 1763,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/Hex.java",
    "chars": 1530,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/InvalidModelException.java",
    "chars": 1758,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/LegacyManifest.java",
    "chars": 5763,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/Manifest.java",
    "chars": 6066,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ModelArchive.java",
    "chars": 12867,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ModelException.java",
    "chars": 1718,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ModelNotFoundException.java",
    "chars": 1763,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/main/java/com/amazonaws/ml/mms/archive/ZipUtils.java",
    "chars": 3234,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/archive/CoverageTest.java",
    "chars": 896,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/archive/Exporter.java",
    "chars": 12076,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/archive/ModelArchiveTest.java",
    "chars": 2876,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/test/java/com/amazonaws/ml/mms/test/TestHelper.java",
    "chars": 5313,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/custom-return-code/MAR-INF/MANIFEST.json",
    "chars": 441,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"noop v1.0\",\n  \"modelServerVersion"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/custom-return-code/service.py",
    "chars": 853,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/error_batch/MAR-INF/MANIFEST.json",
    "chars": 419,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"invalid model\",\n  \"modelServerVer"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/error_batch/service.py",
    "chars": 854,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/init-error/MAR-INF/MANIFEST.json",
    "chars": 445,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"init error model\",\n  \"modelServer"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/init-error/invalid_service.py",
    "chars": 720,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/invalid/MAR-INF/MANIFEST.json",
    "chars": 421,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"invalid model\",\n  \"modelServerVer"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/invalid/invalid_service.py",
    "chars": 704,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/loading-memory-error/MAR-INF/MANIFEST.json",
    "chars": 418,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"noop v1.0\",\n  \"modelServerVersion"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/loading-memory-error/service.py",
    "chars": 816,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/logging/MAR-INF/MANIFEST.json",
    "chars": 418,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"logging v1.0\",\n  \"modelServerVers"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/logging/service.py",
    "chars": 2734,
    "preview": "# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-no-manifest/service.py",
    "chars": 2724,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-v0.1/MANIFEST.json",
    "chars": 576,
    "preview": "{\n\t\"Created-By\": {\n\t\t\"Author\": \"MXNet SDK team\",\n\t\t\"Author-Email\": \"noreply@amazon.com\"\n\t},\n    \"Engine\": {\n        \"MXN"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-v0.1/noop_service.py",
    "chars": 872,
    "preview": "# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-v0.1/signature.json",
    "chars": 331,
    "preview": "{\n  \"inputs\": [\n    {\n      \"data_name\": \"data\",\n      \"shape\": [\n        0,\n        3,\n        224,\n        224\n      ]"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-v1.0/MAR-INF/MANIFEST.json",
    "chars": 411,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"noop v1.0\",\n  \"modelServerVersion"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-v1.0/service.py",
    "chars": 3935,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-v1.0-config-tests/MAR-INF/MANIFEST.json",
    "chars": 418,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"noop v1.0\",\n  \"modelServerVersion"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/noop-v1.0-config-tests/service.py",
    "chars": 3905,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/prediction-memory-error/MAR-INF/MANIFEST.json",
    "chars": 423,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"noop v1.0\",\n  \"modelServerVersion"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/prediction-memory-error/service.py",
    "chars": 904,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/respheader-test/MAR-INF/MANIFEST.json",
    "chars": 425,
    "preview": "{\n  \"specificationVersion\": \"1.0\",\n  \"implementationVersion\": \"1.0\",\n  \"description\": \"noop v1.0\",\n  \"modelServerVersion"
  },
  {
    "path": "frontend/modelarchive/src/test/resources/models/respheader-test/service.py",
    "chars": 3363,
    "preview": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n# Licensed under the Apache License, Version 2"
  },
  {
    "path": "frontend/server/build.gradle",
    "chars": 1010,
    "preview": "dependencies {\n    compile \"io.netty:netty-all:${netty_version}\"\n    compile project(\":modelarchive\")\n    compile \"commo"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/ModelServer.java",
    "chars": 15404,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/ServerInitializer.java",
    "chars": 3795,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/ApiDescriptionRequestHandler.java",
    "chars": 1657,
    "preview": "package com.amazonaws.ml.mms.http;\n\nimport com.amazonaws.ml.mms.archive.ModelException;\nimport com.amazonaws.ml.mms.open"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/BadRequestException.java",
    "chars": 1633,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/ConflictStatusException.java",
    "chars": 1653,
    "preview": "/*\n * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/DescribeModelResponse.java",
    "chars": 5508,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/ErrorResponse.java",
    "chars": 1200,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/HttpRequestHandler.java",
    "chars": 3772,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/HttpRequestHandlerChain.java",
    "chars": 5814,
    "preview": "package com.amazonaws.ml.mms.http;\n\nimport com.amazonaws.ml.mms.archive.ModelException;\nimport com.amazonaws.ml.mms.arch"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/InferenceRequestHandler.java",
    "chars": 8838,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/InternalServerException.java",
    "chars": 1763,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/InvalidPluginException.java",
    "chars": 1326,
    "preview": "/*\n * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/InvalidRequestHandler.java",
    "chars": 633,
    "preview": "package com.amazonaws.ml.mms.http;\n\nimport com.amazonaws.ml.mms.archive.ModelException;\nimport io.netty.channel.ChannelH"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/ListModelsResponse.java",
    "chars": 1976,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/ManagementRequestHandler.java",
    "chars": 14663,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/MethodNotAllowedException.java",
    "chars": 978,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/RequestTimeoutException.java",
    "chars": 1767,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/ResourceNotFoundException.java",
    "chars": 978,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/ServiceUnavailableException.java",
    "chars": 480,
    "preview": "package com.amazonaws.ml.mms.http;\n\npublic class ServiceUnavailableException extends RuntimeException {\n\n    static fina"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/Session.java",
    "chars": 1745,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/StatusResponse.java",
    "chars": 924,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/http/messages/RegisterModelRequest.java",
    "chars": 3956,
    "preview": "/*\n * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/Dimension.java",
    "chars": 1206,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/Metric.java",
    "chars": 4717,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/MetricCollector.java",
    "chars": 6219,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/metrics/MetricManager.java",
    "chars": 1734,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Encoding.java",
    "chars": 1519,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Info.java",
    "chars": 1425,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/MediaType.java",
    "chars": 1693,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/OpenApi.java",
    "chars": 1417,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/OpenApiUtils.java",
    "chars": 24401,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Operation.java",
    "chars": 2870,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Parameter.java",
    "chars": 2515,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/Path.java",
    "chars": 2060,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/PathParameter.java",
    "chars": 1124,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/QueryParameter.java",
    "chars": 1478,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  },
  {
    "path": "frontend/server/src/main/java/com/amazonaws/ml/mms/openapi/RequestBody.java",
    "chars": 1543,
    "preview": "/*\n * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n *\n * Licensed under the Apache License, V"
  }
]

// ... and 266 more files (download for full content)

About this extraction

This page contains the full source code of the awslabs/mxnet-model-server GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 466 files (1.8 MB), approximately 457.3k tokens, and a symbol index with 1721 extracted functions, classes, methods, constants, and types.
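Each record in the file index above is a JSON object with three fields: `path` (file path within the repository), `chars` (file size in characters), and `preview` (the first part of the file's contents). A minimal sketch of consuming such an index is shown below; the two sample records are copied from the listing, and loading from an inline string rather than the downloaded `.txt` file is an assumption made to keep the example self-contained.

```python
import json

# Two hypothetical sample records in the shape used by the index above;
# the real index contains one such object per extracted file.
index_json = """
[
  {"path": "examples/ssd/ssd_service.py", "chars": 4357, "preview": "..."},
  {"path": "frontend/server/build.gradle", "chars": 1010, "preview": "..."}
]
"""

entries = json.loads(index_json)

# Aggregate character counts by top-level directory.
totals = {}
for entry in entries:
    top = entry["path"].split("/", 1)[0]
    totals[top] = totals.get(top, 0) + entry["chars"]

print(totals)  # → {'examples': 4357, 'frontend': 1010}
```

This kind of aggregation is a quick way to see which parts of the repository dominate the extraction before feeding it to a model with a limited context window.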

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
