[
  {
    "path": "README.md",
    "content": "# Client for License Plate Identification in Real-time on AWS w/ Cortex [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://robertlucian.mit-license.org)\n\n**READ THIS: This is a client for 3 (YOLOv3, CRAFT text detector, CRNN text recognizer) [cortex](https://github.com/cortexlabs/cortex)-deployed ML models. This client only works in conjunction with [these cortex APIs](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/license-plate-reader).** \n\n![Imgur](https://i.imgur.com/jgkJB59.gif)\n\n*- The above GIF was taken from [this video](https://www.youtube.com/watch?v=gsYEZtecXlA) of whose predictions were computed on the fly with cortex/AWS -*\n\n## Description\n\nThis app which uses the deployed cortex APIs as a PaaS captures the frames from a video camera, sends them for inferencing to the cortex APIs, recombines them after the responses are received and then the detections/recognitions are overlayed on the output stream. This is done on the car's dashcam (composed of the Raspberry Pi + GSM module) in real-time. Access to the internet is provided through the GSM module's 4G connection. \n\nThe app must be configured to use the API endpoints as shown when calling `cortex get yolov3` and `cortex get crnn`. Checkout how the APIs are defined in [their repository](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/license-plate-reader).\n\nThe app also saves a `csv` file containing the dates and GPS coordinates of each identified license plate.\n\n### Latency\n\nThe observable latency between capturing the frame and broadcasting the predictions in the browser (with all the inference stuff going on) takes about *0.5-1.0 seconds* depending on:\n\n* How many replicas are assigned for each API.\n* Internet connection bandwidth and latency.\n* Broadcast buffer size. To get a smoother stream, use a higher buffer size (*10-30*) or if you want the stream to be displayed as quickly as possible but with possible dropped frames, go with lower values (*<10*).\n\nTo learn more about how the actual device was constructed, check out [this](https://towardsdatascience.com/i-built-a-diy-license-plate-reader-with-a-raspberry-pi-and-machine-learning-7e428d3c7401) article.\n\n## Target Machine\n\nTarget machine **1**: Raspberry Pi.\n\nTarget machine **2**: Any x86 machine.\n\n---\n\nThe app's primary target machine is the *Raspberry Pi* (3/3B+/4) - the Raspberry Pi is a small embedded computer system that got adopted by many hobbyists and institutions all around the world as the de-facto choice for hardware/software experiments.\n\nUnfortunately for the Raspberry Pi, the $35 pocket-sized computer, doesn't have enough oomph (far from it) to do any inference: not only it doesn't have enough memory to load a model, but should it have enough RAM, it would still take dozens of minutes just to get a single inference. Let alone run inferences at 30 FPS, with a number of inferences for each frame.\n\nThe app is built to be used with a *Pi Camera* alongside the *Raspberry Pi*. The details on how to build such a system are found [here](#creating-your-own-device).\n\nSince many developers don't have a Raspberry Pi laying around or are just purely interested in seeing results right away, the app can also be configured to take in a video file and treat it exactly as if it was a camera.\n\n## Dependencies\n\nFor either target machine, the minimum version of Python has to be `3.6.x`.\n\n#### For the Raspberry Pi\n\n```bash\npip3 install --user -r deps/requirements_rpi.txt\nsudo apt-get update\n# dependencies for opencv and pandas package\nsudo apt-get install --no-install-recommends $(cat deps/requirements_dpkg_rpi.txt)\n```\n\n#### For x86 Machine\n\n```bash\npip3 install -r deps/requirements_base.txt\n```\n\n## Configuring\n\nThe configuration file can be in this form\n```jsonc\n{\n    \"video_source\": {\n        // \"file\" for reading from file or \"camera\" for pi camera\n        \"type\": \"file\",\n        // video file to read from; applicable just for \"file\" type\n        \"input\": \"airport_ride_480p.mp4\",\n        // scaling for the video file; applicable just for \"file\" type\n        \"scale_video\": 1.0,\n        // how many frames to skip on the video file; applicable just for \"file\" type\n        \"frames_to_skip\": 0,\n        // framerate\n        \"framerate\": 30\n        // camera sensor mode; applicable just for \"camera\" type\n        // \"sensor_mode\": 5\n        // where to save camera's output; applicable just for \"camera\" type\n        // \"output_file\": \"recording.h264\"\n        // resolution of the input video; applicable just for the \"camera\" type\n        // \"resolution\": [480, 270]\n    },\n    \"broadcaster\": {\n        // when broadcasting, a buffer is required to provide framerate fluidity; measured in frames\n        \"target_buffer_size\": 10,\n        // how much the size of the buffer can variate +-\n        \"max_buffer_size_variation\": 5,\n        // the maximum variation of the fps when extracting frames from the queue\n        \"max_fps_variation\": 15,\n        // target fps - must match the camera/file's framerate\n        \"target_fps\": 30,\n        // address to bind the web server to\n        \"serve_address\": [\"0.0.0.0\", 8000]\n    },\n    \"inferencing_worker\": {\n        // YOLOv3's input size of the image in pixels (must consider the existing model)\n        \"yolov3_input_size_px\": 416,\n        // when drawing the bounding boxes, use a higher res image to draw boxes more precisely\n        // (this way text is more readable)\n        \"bounding_boxes_upscale_px\": 640,\n        // object detection accuracy threshold in percentages with range (0, 1)\n        \"yolov3_obj_thresh\": 0.8,\n        // the jpeg quality of the image for the CRAFT/CRNN models in percentanges;\n        // these models receive the cropped images of each detected license plate\n        \"crnn_quality\": 98,\n        // broadcast quality - aim for a lower value since this stream doesn't influence the predictions; measured in percentages\n        \"broadcast_quality\": 90,\n        // connection timeout for both API endpoints measured in seconds\n        \"timeout\": 1.20,\n        // YOLOv3 API endpoint\n        \"api_endpoint_yolov3\": \"http://a23893c574c0511ea9f430a8bed50c69-1100298247.eu-central-1.elb.amazonaws.com/yolov3\",\n        // CRNN API endpoint\n        // Can be set to \"\" value to turn off the recognition inference\n        // By turning it off, the latency is reduced and the output video appears smoother\n        \"api_endpoint_crnn\": \"http://a23893c574c0511ea9f430a8bed50c69-1100298247.eu-central-1.elb.amazonaws.com/crnn\"\n    },\n    \"inferencing_pool\": {\n        // number of workers to do inferencing (YOLOv3 + CRAFT + CRNN)\n        // depending on the source's framerate, a balance must be achieved\n        \"workers\": 24,\n        // pick the nth frame from the input stream\n        // if the input stream runs at 30 fps, then setting this to 2 would act\n        // as if the input stream runs at 30/2=15 fps\n        // ideally, you have a high fps camera (90-180) and you only pick every 3rd-6th frame\n        \"pick_every_nth_frame\": 1\n    },\n    \"flusher\": {\n        // if there are more than this given number of frames in the input stream's buffer, flush them\n        // it's useful if the inference workers (due to a number of reasons) can't keep up with the input flow\n        // also keeps the current broadcasted stream up-to-date with the reality\n        \"frame_count_threshold\": 5\n    },\n    \"gps\": {\n        // set to false when using a video file to read the stream from\n        // set to true when you have a GPS connected to your system\n        \"use_gps\": false,\n        // port to write to to activate the GPS (built for EC25-E modules)\n        \"write_port\": \"/dev/ttyUSB2\",\n        // port to read from in NMEA standard\n        \"read_port\": \"/dev/ttyUSB1\",\n        // baudrate as measured in bits/s\n        \"baudrate\": 115200\n    },\n    \"general\": {\n        // to which IP the requests module is bound to\n        // useful if you only want to route the traffic through a specific interface\n        \"bind_ip\": \"0.0.0.0\",\n        // where to save the csv data containing the date, the predicted license plate number and GPS data\n        // can be an empty string, in which case, nothing is stored\n        \"saved_data\": \"saved_data.csv\"\n    }\n}\n```\n\nBe aware that for having a functional application, the minimum amount of things that have to be adjusted in the config file are:\n\n1. The input file in case you are using a video to feed the application with. You can download the following `mp4` video file to use as input. Download it by running `wget -O airport_ride_480p.mp4 \"https://www.dropbox.com/s/q9j57y5k95wg2zt/airport_ride_480p.mp4?dl=0\"`\n1. Both API endpoints from your cortex APIs. Use `cortex get your-api-name-here` command to get those.\n\n## Running It\n\nMake sure both APIs are already running in the cluster. Launching it the first time might raise some timeout exceptions, but let it run for a few moments. If there's enough compute capacity, you'll start getting `200` response codes.\n\nRun it like\n```bash\npython app.py -c config.json\n```\n\nOnce it's running, you can head off to its browser page to see the live broadcast with its predictions overlayed on top.\n\nTo save the broascasted MJPEG stream, you can run the following command (check the broadcast serve address in the `config.json` file)\n\n```bash\nPORT=8000\nFRAMERATE=30\nffmpeg -i http://localhost:PORT/stream.mjpg -an -vcodec libx264 -r FRAMERATE saved_video.h264\n```\n\nTo terminate the app, press `CTRL-C` and wait a bit.\n\n## Creating Your Own Device\n\n![Imgur](https://i.imgur.com/MvDAXWU.jpg)\n\nTo create your own Raspberry Pi-powered device to record and display the predictions in real time in your car, you're gonna need the following things:\n1. A Raspberry Pi - preferably a 4, because that one has more oomph.\n1. A Pi Camera - doesn't matter which version of it.\n1. A good buck converter to step-down from 12V down to 5V - aim for 4-5 amps. You can use a SBEC/UBEC/BEC regulators - they are easy to find and cheap.\n1. A power outlet for the car's cigarette port to get the 12V DC.\n1. 4G/GPS shield to host a GSM module - in this project, an EC25-E module has been used. You will also need antennas.\n1. A 3D-printed support to hold the electronics and be able to hold it against the rear mirror or dashboard. Must be built to accomodate to your own car.\n\nWithout convoluting this README too much:\n\n* Here are the [STLs/SLDPRTs/Renders](https://www.dropbox.com/sh/fw16vy1okrp606y/AAAwkoWXODmoaOP4yR-z4T8Va?dl=0) to the car's 3D printed support.\n* Here's an [article](https://towardsdatascience.com/i-built-a-diy-license-plate-reader-with-a-raspberry-pi-and-machine-learning-7e428d3c7401) that talks about this in full detail.\n"
  },
  {
    "path": "app.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\nimport signal, os, time, json, queue, socket, click, cv2, pandas as pd\nimport multiprocessing as mp\nimport threading as td\n\nimport logging\n\nlogger = logging.getLogger()\nstream_handler = logging.StreamHandler()\nstream_handler.setLevel(logging.INFO)\nstream_format = logging.Formatter(\n    \"%(asctime)s - %(name)s - %(threadName)s - %(levelname)s - %(message)s\"\n)\nstream_handler.setFormatter(stream_format)\nlogger.addHandler(stream_handler)\nlogger.setLevel(logging.DEBUG)\n\ndisable_loggers = [\"urllib3.connectionpool\"]\nfor name, logger in logging.root.manager.loggerDict.items():\n    if name in disable_loggers:\n        logger.disabled = True\n\nfrom gps import ReadGPSData\nfrom workers import BroadcastReassembled, InferenceWorker, Flusher, session\nfrom utils.image import resize_image, image_to_jpeg_bytes\nfrom utils.queue import MPQueue\nfrom requests_toolbelt.adapters.source import SourceAddressAdapter\n\n\nclass GracefullKiller:\n    \"\"\"\n    For killing the app gracefully.\n    \"\"\"\n\n    kill_now = False\n\n    def __init__(self):\n        signal.signal(signal.SIGINT, self.exit_gracefully)\n        signal.signal(signal.SIGTERM, self.exit_gracefully)\n\n    def exit_gracefully(self, signum, frame):\n        self.kill_now = True\n\n\nclass WorkerPool(mp.Process):\n    \"\"\"\n    Pool of threads running in a different process.\n    \"\"\"\n\n    def __init__(self, name, worker, pool_size, *args, **kwargs):\n        \"\"\"\n        name - Name of the process.\n        worker - Derived class of thread to execute.\n        pool_size - Number of workers to have.\n        \"\"\"\n        super(WorkerPool, self).__init__(name=name)\n        self.event_stopper = mp.Event()\n        self.Worker = worker\n        self.pool_size = pool_size\n        self.args = args\n        self.kwargs = kwargs\n\n    def run(self):\n        logger.info(\"spawning workers on separate process\")\n        pool = [\n            self.Worker(\n                self.event_stopper,\n                *self.args,\n                **self.kwargs,\n                name=\"{}-Worker-{}\".format(self.name, i),\n            )\n            for i in range(self.pool_size)\n        ]\n        [worker.start() for worker in pool]\n        while not self.event_stopper.is_set():\n            time.sleep(0.001)\n        logger.info(\"stoppping workers on separate process\")\n        [worker.join() for worker in pool]\n\n    def stop(self):\n        self.event_stopper.set()\n\n\nclass DistributeFramesAndInfer:\n    \"\"\"\n    Custom output class primarly built for the PiCamera class.\n    Has 3 process-safe queues: in_queue for the incoming frames from the source,\n    bc_queue for the frames with the predicted overlays heading off to the broadcaster,\n    predicts_queue for the predictions to be written off to the disk.\n    \"\"\"\n\n    def __init__(self, pool_cfg, worker_cfg):\n        \"\"\"\n        pool_cfg - Configuration dictionary for the pool manager.\n        worker_cfg - Configuration dictionary for the pool workers.\n        \"\"\"\n        self.frame_num = 0\n        self.in_queue = MPQueue()\n        self.bc_queue = MPQueue()\n        self.predicts_queue = MPQueue()\n        for key, value in pool_cfg.items():\n            setattr(self, key, value)\n        self.pool = WorkerPool(\n            \"InferencePool\",\n            InferenceWorker,\n            self.workers,\n            self.in_queue,\n            self.bc_queue,\n            self.predicts_queue,\n            worker_cfg,\n        )\n        self.pool.start()\n\n    def write(self, buf):\n        \"\"\"\n        Mandatory custom output method for the PiCamera class.\n        buf - Frame as a bytes object.\n        \"\"\"\n        if buf.startswith(b\"\\xff\\xd8\"):\n            # start of new frame; close the old one (if any) and\n            if self.frame_num % self.pick_every_nth_frame == 0:\n                self.in_queue.put({\"frame_num\": self.frame_num, \"jpeg\": buf})\n            self.frame_num += 1\n\n    def stop(self):\n        \"\"\"\n        Stop all workers and the process altogether.\n        \"\"\"\n        self.pool.stop()\n        self.pool.join()\n        qs = [self.in_queue, self.bc_queue]\n        [q.cancel_join_thread() for q in qs]\n\n    def get_queues(self):\n        \"\"\"\n        Retrieve all queues.\n        \"\"\"\n        return self.in_queue, self.bc_queue, self.predicts_queue\n\n\n@click.command(\n    help=(\n        \"Identify license plates from a given video source\"\n        \" while outsourcing the predictions using REST API endpoints.\"\n    )\n)\n@click.option(\"--config\", \"-c\", required=True, type=str)\ndef main(config):\n    killer = GracefullKiller()\n\n    # open config file\n    try:\n        file = open(config)\n        cfg = json.load(file)\n        file.close()\n    except Exception as error:\n        logger.critical(str(error), exc_info=1)\n        return\n\n    # give meaningful names to each sub config\n    source_cfg = cfg[\"video_source\"]\n    broadcast_cfg = cfg[\"broadcaster\"]\n    pool_cfg = cfg[\"inferencing_pool\"]\n    worker_cfg = cfg[\"inferencing_worker\"]\n    flusher_cfg = cfg[\"flusher\"]\n    gps_cfg = cfg[\"gps\"]\n    gen_cfg = cfg[\"general\"]\n\n    # bind requests module to use a given network interface\n    try:\n        socket.inet_aton(gen_cfg[\"bind_ip\"])\n        session.mount(\"http://\", SourceAddressAdapter(gen_cfg[\"bind_ip\"]))\n        logger.info(\"binding requests module to {} IP\".format(gen_cfg[\"bind_ip\"]))\n    except OSError as e:\n        logger.error(\"bind IP is invalid, resorting to default interface\", exc_info=True)\n\n    # start polling the GPS\n    if gps_cfg[\"use_gps\"]:\n        wport = gps_cfg[\"write_port\"]\n        rport = gps_cfg[\"read_port\"]\n        br = gps_cfg[\"baudrate\"]\n        gps = ReadGPSData(wport, rport, br)\n        gps.start()\n    else:\n        gps = None\n\n    # workers on a separate process to run inference on the data\n    logger.info(\"initializing pool w/ \" + str(pool_cfg[\"workers\"]) + \" workers\")\n    output = DistributeFramesAndInfer(pool_cfg, worker_cfg)\n    frames_queue, bc_queue, predicts_queue = output.get_queues()\n    logger.info(\"initialized worker pool\")\n\n    # a single worker in a separate process to reassemble the data\n    reassembler = BroadcastReassembled(bc_queue, broadcast_cfg, name=\"BroadcastReassembled\")\n    reassembler.start()\n\n    # a single thread to flush the producing queue\n    # when there are too many frames in the pipe\n    flusher = Flusher(frames_queue, threshold=flusher_cfg[\"frame_count_threshold\"], name=\"Flusher\")\n    flusher.start()\n\n    # data aggregator to write things to disk\n    def results_writer():\n        if len(gen_cfg[\"saved_data\"]) > 0:\n            df = pd.DataFrame(columns=[\"Date\", \"License Plate\", \"Coordinates\"])\n            while not killer.kill_now:\n                time.sleep(0.01)\n                try:\n                    data = predicts_queue.get_nowait()\n                except queue.Empty:\n                    continue\n                predicts = data[\"predicts\"]\n                date = data[\"date\"]\n                for lp in predicts:\n                    if len(lp) > 0:\n                        lp = \" \".join(lp)\n                        entry = {\"Date\": date, \"License Plate\": lp, \"Coordinates\": \"\"}\n                        if gps:\n                            entry[\"Coordinates\"] = \"{}, {}\".format(\n                                gps.latitude, gps.longitude\n                            ).upper()\n                        df = df.append(entry, ignore_index=True)\n\n            logger.info(\"dumping results to csv file {}\".format(gen_cfg[\"saved_data\"]))\n            if os.path.isfile(gen_cfg[\"saved_data\"]):\n                header = False\n            else:\n                header = True\n            with open(gen_cfg[\"saved_data\"], \"a\") as f:\n                df.to_csv(f, header=header)\n\n    # data aggregator thread\n    results_thread = td.Thread(target=results_writer)\n    results_thread.start()\n\n    if source_cfg[\"type\"] == \"camera\":\n        # import module\n        import picamera\n\n        # start the pi camera\n        with picamera.PiCamera() as camera:\n            # configure the camera\n            camera.sensor_mode = source_cfg[\"sensor_mode\"]\n            camera.resolution = source_cfg[\"resolution\"]\n            camera.framerate = source_cfg[\"framerate\"]\n            logger.info(\n                \"picamera initialized w/ mode={} resolution={} framerate={}\".format(\n                    camera.sensor_mode, camera.resolution, camera.framerate\n                )\n            )\n\n            # start recording both to disk and to the queue\n            camera.start_recording(\n                output=source_cfg[\"output_file\"], format=\"h264\", splitter_port=0, bitrate=10000000,\n            )\n            camera.start_recording(\n                output=output, format=\"mjpeg\", splitter_port=1, bitrate=10000000, quality=95,\n            )\n            logger.info(\"started recording to file and to queue\")\n\n            # wait until SIGINT is detected\n            while not killer.kill_now:\n                camera.wait_recording(timeout=0.5, splitter_port=0)\n                camera.wait_recording(timeout=0.5, splitter_port=1)\n                logger.info(\n                    \"frames qsize: {}, broadcast qsize: {}, predicts qsize: {}\".format(\n                        frames_queue.qsize(), bc_queue.qsize(), predicts_queue.qsize()\n                    )\n                )\n\n            # stop recording\n            logger.info(\"gracefully exiting\")\n            camera.stop_recording(splitter_port=0)\n            camera.stop_recording(splitter_port=1)\n            output.stop()\n\n    elif source_cfg[\"type\"] == \"file\":\n        # open video file\n        video_reader = cv2.VideoCapture(source_cfg[\"input\"])\n        video_reader.set(cv2.CAP_PROP_POS_FRAMES, source_cfg[\"frames_to_skip\"])\n\n        # get # of frames and determine target width\n        nb_frames = int(video_reader.get(cv2.CAP_PROP_FRAME_COUNT))\n        frame_h = int(video_reader.get(cv2.CAP_PROP_FRAME_HEIGHT))\n        frame_w = int(video_reader.get(cv2.CAP_PROP_FRAME_WIDTH))\n        target_h = int(frame_h * source_cfg[\"scale_video\"])\n        target_w = int(frame_w * source_cfg[\"scale_video\"])\n        period = 1.0 / source_cfg[\"framerate\"]\n\n        logger.info(\n            \"file-based video stream initialized w/ resolution={} framerate={} and {} skipped frames\".format(\n                (target_w, target_h), source_cfg[\"framerate\"], source_cfg[\"frames_to_skip\"],\n            )\n        )\n\n        # serve each frame to the workers iteratively\n        last_log = time.time()\n        for i in range(nb_frames):\n            start = time.time()\n            try:\n                # write frame to queue\n                _, frame = video_reader.read()\n                if target_w != frame_w:\n                    frame = resize_image(frame, target_w)\n                jpeg = image_to_jpeg_bytes(frame)\n                output.write(jpeg)\n            except Exception as error:\n                logger.error(\"unexpected error occurred\", exc_info=True)\n                break\n            end = time.time()\n            spent = end - start\n            left = period - spent\n            if left > 0:\n                # maintain framerate\n                time.sleep(period)\n\n            # check if SIGINT has been sent\n            if killer.kill_now:\n                break\n\n            # do logs every second\n            current = time.time()\n            if current - last_log >= 1.0:\n                logger.info(\n                    \"frames qsize: {}, broadcast qsize: {}, predicts qsize: {}\".format(\n                        frames_queue.qsize(), bc_queue.qsize(), predicts_queue.qsize()\n                    )\n                )\n                last_log = current\n\n        logger.info(\"gracefully exiting\")\n        video_reader.release()\n        output.stop()\n\n    if gps_cfg[\"use_gps\"]:\n        gps.stop()\n\n    reassembler.stop()\n    flusher.stop()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "broadcast.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\nimport io\nimport logging\nimport socketserver\nfrom threading import Condition\nfrom http import server\n\nPAGE = \"\"\"\\\n<html>\n<head>\n<title>License Plate Predictions</title>\n</head>\n<body>\n<h1>Live License Plate Predictions</h1>\n<img src=\"stream.mjpg\" width=\"1280\" height=\"720\" />\n</body>\n</html>\n\"\"\"\n\nlogger = logging.getLogger(__name__)\n\n\nclass StreamingOutput(object):\n    def __init__(self):\n        self.frame = None\n        self.buffer = io.BytesIO()\n        self.condition = Condition()\n\n    def write(self, buf):\n        if buf.startswith(b\"\\xff\\xd8\"):\n            # New frame, copy the existing buffer's content and notify all\n            # clients it's available\n            self.buffer.truncate()\n            with self.condition:\n                self.frame = self.buffer.getvalue()\n                self.condition.notify_all()\n            self.buffer.seek(0)\n        return self.buffer.write(buf)\n\n\nclass StreamingHandler(server.BaseHTTPRequestHandler):\n    def do_GET(self):\n        if self.path == \"/\":\n            self.send_response(301)\n            self.send_header(\"Location\", \"/index.html\")\n            self.end_headers()\n        elif self.path == \"/index.html\":\n            content = PAGE.encode(\"utf-8\")\n            self.send_response(200)\n            self.send_header(\"Content-Type\", \"text/html\")\n            self.send_header(\"Content-Length\", len(content))\n            self.end_headers()\n            self.wfile.write(content)\n        elif self.path == \"/stream.mjpg\":\n            self.send_response(200)\n            self.send_header(\"Age\", 0)\n            self.send_header(\"Cache-Control\", \"no-cache, private\")\n            self.send_header(\"Pragma\", \"no-cache\")\n            self.send_header(\"Content-Type\", \"multipart/x-mixed-replace; boundary=FRAME\")\n            self.end_headers()\n            try:\n                while True:\n                    with output.condition:\n                        output.condition.wait()\n                        frame = output.frame\n                    self.wfile.write(b\"--FRAME\\r\\n\")\n                    self.send_header(\"Content-Type\", \"image/jpeg\")\n                    self.send_header(\"Content-Length\", len(frame))\n                    self.end_headers()\n                    self.wfile.write(frame)\n                    self.wfile.write(b\"\\r\\n\")\n            except Exception as e:\n                logging.warning(\"Removed streaming client %s: %s\", self.client_address, str(e))\n        else:\n            self.send_error(404)\n            self.end_headers()\n\n    def set_output(output):\n        self.output = output\n\n\nclass StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):\n    allow_reuse_address = True\n    daemon_threads = True\n\n\noutput = StreamingOutput()\n"
  },
  {
    "path": "config.json",
    "content": "{\n    \"video_source\": {\n        \"type\": \"file\",\n        \"input\": \"airport_ride_480p.mp4\",\n        \"scale_video\": 1.0,\n        \"frames_to_skip\": 0,\n        \"framerate\": 30\n    },\n    \"broadcaster\": {\n        \"target_buffer_size\": 10,\n        \"max_buffer_size_variation\": 5,\n        \"max_fps_variation\": 15,\n        \"target_fps\": 30,\n        \"serve_address\": [\"0.0.0.0\", 8000]\n    },\n    \"inferencing_worker\": {\n        \"yolov3_input_size_px\": 416,\n        \"bounding_boxes_upscale_px\": 640,\n        \"yolov3_obj_thresh\": 0.8,\n        \"crnn_quality\": 98,\n        \"broadcast_quality\": 90,\n        \"timeout\": 1.20,\n        \"api_endpoint_yolov3\": \"http://a23893c574c0511ea9f430a8bed50c69-1100298247.eu-central-1.elb.amazonaws.com/yolov3\",\n        \"api_endpoint_crnn\": \"http://a23893c574c0511ea9f430a8bed50c69-1100298247.eu-central-1.elb.amazonaws.com/crnn\"\n    },\n    \"inferencing_pool\": {\n        \"workers\": 24,\n        \"pick_every_nth_frame\": 1\n    },\n    \"flusher\": {\n        \"frame_count_threshold\": 5\n    },\n    \"gps\": {\n        \"use_gps\": false,\n        \"write_port\": \"/dev/ttyUSB2\",\n        \"read_port\": \"/dev/ttyUSB1\",\n        \"baudrate\": 115200\n    },\n    \"general\": {\n        \"bind_ip\": \"0.0.0.0\",\n        \"saved_data\": \"saved_data.csv\"\n    }\n}\n"
  },
  {
    "path": "deps/requirements_base.txt",
    "content": "certifi==2019.11.28\nidna==2.8\nnumpy==1.18.1\nopencv-contrib-python==4.1.0.25\nrequests==2.22.0\nrequests-toolbelt==0.9.1\nurllib3==1.25.8\npynmea2==1.15.0\nClick==7.0\n"
  },
  {
    "path": "deps/requirements_dpkg_rpi.txt",
    "content": "build-essential\ncmake\nunzip\npkg-config\nlibjpeg-dev\nlibtiff5-dev\nlibjasper-dev\nlibpng-dev\nlibavcodec-dev\nlibavformat-dev\nlibswscale-dev\nlibv4l-dev\nlibxvidcore-dev\nlibx264-dev\nlibfontconfig1-dev\nlibcairo2-dev\nlibgdk-pixbuf2.0-dev\nlibpango1.0-dev\nlibgtk2.0-dev\nlibgtk-3-dev\nlibatlas-base-dev\ngfortran\nlibhdf5-dev\nlibhdf5-serial-dev\nlibhdf5-103\nlibqtgui4\nlibqtwebkit4\nlibqt4-test\npython3-pyqt5\npython3-dev\npython3-pandas\n"
  },
  {
    "path": "deps/requirements_rpi.txt",
    "content": "certifi==2019.11.28\nidna==2.8\nnumpy==1.18.1\nopencv-contrib-python==4.1.0.25\npicamera==1.13\nrequests==2.22.0\nrequests-toolbelt==0.9.1\nurllib3==1.25.8\npynmea2==1.15.0\nClick==7.0\n"
  },
  {
    "path": "drawings/License Plate Identifier Client.drawio",
    "content": "<mxfile host=\"www.draw.io\" modified=\"2020-02-13T20:55:49.174Z\" agent=\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.2 Safari/605.1.15\" etag=\"NyBexjbPMC27BK2-6498\" version=\"12.7.0\" type=\"device\"><diagram name=\"Page-1\" id=\"c7558073-3199-34d8-9f00-42111426c3f3\">7Vxbc9o4GP01zLQP7UiWr48Bkm6n3Vm2dNtmX3YUrIC3xqK2aML++pVsGdtCECf4EkrSmWJdLGSdT+e7yQzQaHn/Lsarxe/UJ+HAAP79AI0HhgE92+UfomYja1zLyWrmceDLuqJiGvxHZCWQtevAJ0mlI6M0ZMGqWjmjUURmrFKH45jeVbvd0rD6rSs8JzsV0xkOd2u/Bj5bZLUGQl7R8BsJ5gv51QgBOfMbPPs+j+k6kl84MNBt+pc1L3E+mOyfLLBP70pV6HKARjGlLLta3o9IKFY3X7fsvqs9rduJxyRidW74+HcQLK4/vHHG3/768H7849M/DLyBpoTrJw7XJH+QdLpsk69R+pBEDAMGaHi3CBiZrvBMtN5xseB1C7YMeQnySx8ni7RvXphgxkgcpTUG4I89TFhMv5MRDWmcfgFfUvHHW25pxKSMQFuWS/3Msfgn6oMw/INPIWAbOasQ35BwQpOABVR814yvCuG3DX+SmAUc749KB0bFxHEYzLXdL2TDDWWMLnmDXCbeTO73IgC3uPIdQ+iSsHjDu+Q3WLnsyN1imrJ8V4ge8mTdoiR1Ti5EWIr7fDt4gTi/kKA/SgCsFwHoSgBMpOBv7eLvmd3ib6CG8T8Mbj0wa8hQE/sRwCoe2/1ZwsNwNXggqzU8QA08iM91mCzSmC3onEY4vCxqh1XEij4fqZD6dFn/JYxt5F7Da0arKO7Dii9tvPkmx00L16Lw1sqL4/ty4zhHNHsGMfHDYPHnpOt4Rg4skWdLOwHHc8IOdXT18MckxCz4WZ1JC0hafQD3VID2Ad4LcF6/wNm9AHcfsG+l6xJsvFSgJgqbMoQnDjYEoE+083lW+NYOmVylihjYP9Y0b3iTpEBe8A7QXt0XjfxqLj7fR7eELxtfJANMYjojSZIPzCeajZ31PKBv4cP6VmjPEpa3ACOAdXp4OHa0elh6LY2oVNepalTb2NWo0NBoVLs1hQrdPnZzwuWeXQgHVRiXIU6SYJZXXwVhwdb+bideWerSxV516+5VaB+5V9Nb+SPjTanDigYRS0ojT0RFyVAzFLGyLMXXfegG27EO32ChI2/wUOUGfpE9ZSG52+U6gqzcDsjqK42/c1eId22Fr3Z4yRg79kFeUhjOBtiDTlMugKtI1jMgLO9XNz9Oixy9EyNHG50pOXpdkqNxhuRoa+IjHZPjNpp+Ek71SREdzNE9GaZzwXky3RapbqguOkOqc5zeqQ5ADcgq9xXMEdGIqKHz/QHdq/SvQm0Fm12XaW8PteX2YmEjXpdNRK292AtZ7QsmloC1NLjmdY+jtF2CgErOz0GKwGSPKm87RDXQUrjJVUbKlmJnpOZYx2iGdVK6UYgnEjyzXt6khENvBT4p/dSOmfG9z6oS/2Bqbxn4fmYfED5DfJMOJeRUKiI+rjUcWELNCpMgkXnJA9pYTV82wky2IkBQw0y2RoLbyxkiuLP8nRphb02zaod5nvkQXYnShMQBXwIhDLvmmeTPsm0G6ttmdZLWjdKfU9dWA/1mwZBGaXUqK4/SaucgEd6xxvuRAnFSadFzEAiI0JES8SR3zlFMbu8Bb07tv9WM7fpaqJf8kd68zlv05vV5yKrTK3uZ/aizvoLzv6wUHXvC5EmM51qPYzy1//bAWsvRJd2Z4CaiS1MhRoP0OMQ8xstuwkoXFgAm2A0f+S4ADjoQbmrAeTMV383tP6oE60SVzvp8ofSYavlWjdDIzr631UiPety09UhPW+cMJgFvGuElifFAyP0V//9L4BPKP7nyEGHnaYaPAT4vYoL9bkgCAPvy4uoAGSjkAW8wJA0dQzZVtHVHEHQBnhZJoh8f7QlWTidmRd0TAD171vk8G9+1V1xZix3555rwgWtuyGSBV+KSLzkOQxJmKh8NV6UgXKWtFJ179P4dAs96TO6IoJkLQDP7165z5lH3VkeL+7eBQ62uThJGNF0oA1xM3nNDF7z6dDn9nBXFdov8zAR+3Q1rm8gYW0591B3bBi5uBnVoKbBvwxPll0eQBnYEW8Pd0Bl3j8PdsPR5YfH6DWFnn41B6pGYPJxZ3uxAh3p7oPf6ugJ4a/QSoWpb6efvxT3sBBhmr1rf6Dk+6Zw7/P0GJJHZL/z10d+Xkyuc+V8uIGmYdaXo6BTMkSRituQ6cFHD/gwn7MV92GNSeGqM0NO4D7qXkNtzH3Jn5flzyilzQ3fJiiO5oa10wCQmfjATCZMXatB7G/D5cQPS/T5Bo5oiPQL4Ckd+KuJ8w5IldwbjdmIKT/m1ksqLtAdoY5+IbF/RbUJ7eFUJgQD1nmIyUS/q4zSOfD0lyN2k2kHPPEcODTXpDatZ8raO+exquXeT6aBIQ3XNMpWU1BNYptEUla2yTP+JbLSb7RgHyXdeM14vV5kSGU2/vH6BUG4jRVN4mnh114ri+DQF0toSozAg0fMNVtd3NY5LTLlWBXFL9+YA7DRWbR5/sGAf4nTtC9XwdXrusEPLU3DX/Myg0VCOgheLn7DMlHHxS6Ho8n8=</diagram></mxfile>"
  },
  {
    "path": "gps.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\nimport serial, pynmea2, time, threading as td\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass ReadGPSData(td.Thread):\n    \"\"\"\n    Class to read the data off of the EC25-E's GPS module.\n\n    Can be easily adapted to work with any other GPS module.\n    \"\"\"\n\n    def __init__(self, write_port, read_port, baudrate, name=\"GPS\"):\n        \"\"\"\n        write_port - The serial port to use for activating the GPS.\n        read_port - The serial port from which to read the GPS data.\n        baudrate - Transport rate over the serial port.\n        name - Name of the thread.\n        \"\"\"\n        super(ReadGPSData, self).__init__(name=name)\n        self.write_port = write_port\n        self.read_port = read_port\n        self.baudrate = baudrate\n        self.event = td.Event()\n        self.lock = td.Lock()\n\n    def run(self):\n        logger.info(\"configuring GPS on port {}\".format(self.write_port))\n        self.serw = serial.Serial(\n            self.write_port, baudrate=self.baudrate, timeout=1, rtscts=True, dsrdtr=True\n        )\n        self.serw.write(\"AT+QGPS=1\\r\".encode(\"utf-8\"))\n        self.serw.close()\n        time.sleep(0.5)\n\n        self.serr = serial.Serial(\n            self.read_port, baudrate=self.baudrate, timeout=1, rtscts=True, dsrdtr=True\n        )\n        logger.info(\"configured GPS to read from port {}\".format(self.read_port))\n\n        while not self.event.is_set():\n            data = self.serr.readline()\n            self.lock.acquire()\n            try:\n                self.__msg = pynmea2.parse(data.decode(\"utf-8\"))\n            except:\n                pass\n            finally:\n                self.lock.release()\n            logger.info(self.__msg)\n            time.sleep(1)\n\n        logger.info(\"stopped GPS thread\")\n\n    @property\n    def parsed(self):\n        \"\"\"\n        Get the whole parsed data.\n        \"\"\"\n        self.lock.acquire()\n        try:\n            data = self.__msg\n        except:\n            data = None\n        finally:\n            self.lock.release()\n        return data\n\n    @property\n    def latitude(self):\n        \"\"\"\n        Returns latitude expressed as a float.\n        \"\"\"\n        self.lock.acquire()\n        try:\n            latitude = self.__msg.latitude\n        except:\n            latitude = 0.0\n        finally:\n            self.lock.release()\n        return latitude\n\n    @property\n    def longitude(self):\n        \"\"\"\n        Returns longitude expressed as a float.\n        \"\"\"\n        self.lock.acquire()\n        try:\n            longitude = self.__msg.longitude\n        except:\n            longitude = 0.0\n        finally:\n            self.lock.release()\n        return longitude\n\n    def stop(self):\n        \"\"\"\n        Stop the thread.\n        \"\"\"\n        self.event.set()\n"
  },
  {
    "path": "utils/bbox.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\nimport numpy as np\nimport cv2\nfrom .colors import get_color\n\n\nclass BoundBox:\n    def __init__(self, xmin, ymin, xmax, ymax, c=None, classes=None):\n        self.xmin = xmin\n        self.ymin = ymin\n        self.xmax = xmax\n        self.ymax = ymax\n\n        self.c = c\n        self.classes = classes\n\n        self.label = -1\n        self.score = -1\n\n    def get_label(self):\n        if self.label == -1:\n            self.label = np.argmax(self.classes)\n\n        return self.label\n\n    def get_score(self):\n        if self.score == -1:\n            self.score = self.classes[self.get_label()]\n\n        return self.score\n\n\ndef draw_boxes(image, boxes, overlay_text, labels, obj_thresh, quiet=True):\n    for box, overlay in zip(boxes, overlay_text):\n        label_str = \"\"\n        label = -1\n\n        for i in range(len(labels)):\n            if box.classes[i] > obj_thresh:\n                if label_str != \"\":\n                    label_str += \", \"\n                label_str += labels[i] + \" \" + str(round(box.get_score() * 100, 2)) + \"%\"\n                label = i\n            if not quiet:\n                print(label_str)\n\n        if label >= 0:\n            if len(overlay) > 0:\n                text = label_str + \": [\" + \" \".join(overlay) + \"]\"\n            else:\n                text = label_str\n            text = text.upper()\n            text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 1.1e-3 * image.shape[0], 5)\n            width, height = text_size[0][0], text_size[0][1]\n            region = np.array(\n                [\n                    [box.xmin - 3, box.ymin],\n                    [box.xmin - 3, box.ymin - height - 26],\n                    [box.xmin + width + 13, box.ymin - height - 26],\n                    [box.xmin + width + 13, box.ymin],\n                ],\n                dtype=\"int32\",\n            )\n\n            # cv2.rectangle(img=image, pt1=(box.xmin,box.ymin), pt2=(box.xmax,box.ymax), color=get_color(label), thickness=5)\n            rec = (box.xmin, box.ymin, box.xmax - box.xmin, box.ymax - box.ymin)\n            rec = tuple(int(i) for i in rec)\n            cv2.rectangle(img=image, rec=rec, color=get_color(label), thickness=3)\n            cv2.fillPoly(img=image, pts=[region], color=get_color(label))\n            cv2.putText(\n                img=image,\n                text=text,\n                org=(box.xmin + 13, box.ymin - 13),\n                fontFace=cv2.FONT_HERSHEY_SIMPLEX,\n                fontScale=1e-3 * image.shape[0],\n                color=(0, 0, 0),\n                thickness=1,\n            )\n\n    return image\n"
  },
  {
    "path": "utils/colors.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\n\ndef get_color(label):\n    \"\"\" Return a color from a set of predefined colors. Contains 80 colors in total.\n    code originally from https://github.com/fizyr/keras-retinanet/\n    Args\n        label: The label to get the color for.\n    Returns\n        A list of three values representing a RGB color.\n    \"\"\"\n    if label < len(colors):\n        return colors[label]\n    else:\n        print(\"Label {} has no color, returning default.\".format(label))\n        return (0, 255, 0)\n\n\ncolors = [\n    [31, 0, 255],\n    [0, 159, 255],\n    [255, 95, 0],\n    [255, 19, 0],\n    [255, 0, 0],\n    [255, 38, 0],\n    [0, 255, 25],\n    [255, 0, 133],\n    [255, 172, 0],\n    [108, 0, 255],\n    [0, 82, 255],\n    [0, 255, 6],\n    [255, 0, 152],\n    [223, 0, 255],\n    [12, 0, 255],\n    [0, 255, 178],\n    [108, 255, 0],\n    [184, 0, 255],\n    [255, 0, 76],\n    [146, 255, 0],\n    [51, 0, 255],\n    [0, 197, 255],\n    [255, 248, 0],\n    [255, 0, 19],\n    [255, 0, 38],\n    [89, 255, 0],\n    [127, 255, 0],\n    [255, 153, 0],\n    [0, 255, 255],\n    [0, 255, 216],\n    [0, 255, 121],\n    [255, 0, 248],\n    [70, 0, 255],\n    [0, 255, 159],\n    [0, 216, 255],\n    [0, 6, 255],\n    [0, 63, 255],\n    [31, 255, 0],\n    [255, 57, 0],\n    [255, 0, 210],\n    [0, 255, 102],\n    [242, 255, 0],\n    [255, 191, 0],\n    [0, 255, 63],\n    [255, 0, 95],\n    [146, 0, 255],\n    [184, 255, 0],\n    [255, 114, 0],\n    [0, 255, 235],\n    [255, 229, 0],\n    [0, 178, 255],\n    [255, 0, 114],\n    [255, 0, 57],\n    [0, 140, 255],\n    [0, 121, 255],\n    [12, 255, 0],\n    [255, 210, 0],\n    [0, 255, 44],\n    [165, 255, 0],\n    [0, 25, 255],\n    [0, 255, 140],\n    [0, 101, 255],\n    [0, 255, 82],\n    [223, 255, 0],\n    [242, 0, 255],\n    [89, 0, 255],\n    [165, 0, 255],\n    [70, 255, 0],\n    [255, 0, 172],\n    [255, 76, 0],\n    [203, 255, 0],\n    [204, 0, 255],\n    [255, 0, 229],\n    [255, 133, 0],\n    [127, 0, 255],\n    [0, 235, 255],\n    [0, 255, 197],\n    [255, 0, 191],\n    [0, 44, 255],\n    [50, 255, 0],\n]\n"
  },
  {
    "path": "utils/image.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\nimport cv2\nimport numpy as np\n\n\ndef resize_image(image, desired_width):\n    current_width = image.shape[1]\n    scale_percent = desired_width / current_width\n    width = int(image.shape[1] * scale_percent)\n    height = int(image.shape[0] * scale_percent)\n    resized = cv2.resize(image, (width, height), interpolation=cv2.INTER_AREA)\n    return resized\n\n\ndef compress_image(image, grayscale=True, desired_width=416, top_crop_percent=0.45):\n    if grayscale:\n        image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n    image = resize_image(image, desired_width)\n    height = image.shape[0]\n    if top_crop_percent:\n        image[: int(height * top_crop_percent)] = 128\n\n    return image\n\n\ndef image_from_bytes(byte_im):\n    nparr = np.frombuffer(byte_im, np.uint8)\n    img_np = cv2.imdecode(nparr, cv2.IMREAD_COLOR)\n    return img_np\n\n\ndef image_to_jpeg_nparray(image, quality=[int(cv2.IMWRITE_JPEG_QUALITY), 95]):\n    is_success, im_buf_arr = cv2.imencode(\".jpg\", image, quality)\n    return im_buf_arr\n\n\ndef image_to_jpeg_bytes(image, quality=[int(cv2.IMWRITE_JPEG_QUALITY), 95]):\n    buf = image_to_jpeg_nparray(image, quality)\n    byte_im = buf.tobytes()\n    return byte_im\n"
  },
  {
    "path": "utils/queue.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\nfrom multiprocessing.queues import Queue as mp_queue\nimport multiprocessing as mp\n\n\nclass SharedCounter(object):\n    \"\"\" A synchronized shared counter.\n\n    The locking done by multiprocessing.Value ensures that only a single\n    process or thread may read or write the in-memory ctypes object. However,\n    in order to do n += 1, Python performs a read followed by a write, so a\n    second process may read the old value before the new one is written by the\n    first process. The solution is to use a multiprocessing.Lock to guarantee\n    the atomicity of the modifications to Value.\n\n    This class comes almost entirely from Eli Bendersky's blog:\n    http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing/\n\n    \"\"\"\n\n    def __init__(self, n=0):\n        self.count = mp.Value(\"i\", n)\n\n    def increment(self, n=1):\n        \"\"\" Increment the counter by n (default = 1) \"\"\"\n        with self.count.get_lock():\n            self.count.value += n\n\n    def reset(self):\n        \"\"\" Reset the counter to 0 \"\"\"\n        with self.count.get_lock():\n            self.count.value = 0\n\n    @property\n    def value(self):\n        \"\"\" Return the value of the counter \"\"\"\n        return self.count.value\n\n\nclass MPQueue(mp_queue):\n    \"\"\" A portable implementation of multiprocessing.Queue.\n\n    Because of multithreading / multiprocessing semantics, Queue.qsize() may\n    raise the NotImplementedError exception on Unix platforms like Mac OS X\n    where sem_getvalue() is not implemented. This subclass addresses this\n    problem by using a synchronized shared counter (initialized to zero) and\n    increasing / decreasing its value every time the put() and get() methods\n    are called, respectively. This not only prevents NotImplementedError from\n    being raised, but also allows us to implement a reliable version of both\n    qsize() and empty().\n\n    \"\"\"\n\n    def __init__(self, *args, **kwargs):\n        self.ctx = mp.get_context()\n        super(MPQueue, self).__init__(*args, **kwargs, ctx=self.ctx)\n        self.size = SharedCounter(0)\n\n    def put(self, *args, **kwargs):\n        self.size.increment(1)\n        super(MPQueue, self).put(*args, **kwargs)\n\n    def get(self, *args, **kwargs):\n        self.size.increment(-1)\n        if self.size.value < 0:\n            self.size.reset()\n        return super(MPQueue, self).get(*args, **kwargs)\n\n    def qsize(self):\n        \"\"\" Reliable implementation of multiprocessing.Queue.qsize() \"\"\"\n        return self.size.value\n\n    def empty(self):\n        \"\"\" Reliable implementation of multiprocessing.Queue.empty() \"\"\"\n        return not self.qsize()\n"
  },
  {
    "path": "workers.py",
    "content": "# WARNING: you are on the master branch, please refer to the examples on the branch that matches your `cortex version`\n\nfrom utils.image import (\n    resize_image,\n    compress_image,\n    image_from_bytes,\n    image_to_jpeg_nparray,\n    image_to_jpeg_bytes,\n)\nfrom utils.bbox import BoundBox, draw_boxes\nfrom statistics import mean\nimport time, base64, pickle, json, cv2, logging, requests, queue, broadcast, copy, statistics\n\nimport numpy as np\nimport threading as td\nimport multiprocessing as mp\n\nlogger = logging.getLogger(__name__)\nlogger.setLevel(logging.DEBUG)\n\nsession = requests.Session()\n\n\nclass WorkerTemplateThread(td.Thread):\n    def __init__(self, event_stopper, name=None, runnable=None):\n        td.Thread.__init__(self, name=name)\n        self.event_stopper = event_stopper\n        self.runnable = runnable\n\n    def run(self):\n        if self.runnable:\n            logger.debug(\"worker started\")\n            while not self.event_stopper.is_set():\n                self.runnable()\n                time.sleep(0.030)\n            logger.debug(\"worker stopped\")\n\n    def stop(self):\n        self.event_stopper.set()\n\n\nclass WorkerTemplateProcess(mp.Process):\n    def __init__(self, event_stopper, name=None, runnable=None):\n        mp.Process.__init__(self, name=name)\n        self.event_stopper = event_stopper\n        self.runnable = runnable\n\n    def run(self):\n        if self.runnable:\n            logger.debug(\"worker started\")\n            while not self.event_stopper.is_set():\n                self.runnable()\n                time.sleep(0.030)\n            logger.debug(\"worker stopped\")\n\n    def stop(self):\n        self.event_stopper.set()\n\n\nclass BroadcastReassembled(WorkerTemplateProcess):\n    \"\"\"\n    Separate process to broadcast the stream with the overlayed predictions on top of it.\n    \"\"\"\n\n    def __init__(self, in_queue, cfg, name=None):\n        \"\"\"\n        in_queue - Queue from which to extract the frames with the overlayed predictions on top of it.\n        cfg - The dictionary config for the broadcaster.\n        name - Name of the process.\n        \"\"\"\n        super(BroadcastReassembled, self).__init__(event_stopper=mp.Event(), name=name)\n        self.in_queue = in_queue\n        self.yolo3_rtt = None\n        self.crnn_rtt = None\n        self.detections = 0\n        self.current_detections = 0\n        self.recognitions = 0\n        self.current_recognitions = 0\n        self.buffer = []\n        self.oldest_broadcasted_frame = 0\n\n        for key, value in cfg.items():\n            setattr(self, key, value)\n\n    def run(self):\n        # start streaming server\n        def lambda_func():\n            server = broadcast.StreamingServer(\n                tuple(self.serve_address), broadcast.StreamingHandler\n            )\n            server.serve_forever()\n\n        td.Thread(target=lambda_func, args=(), daemon=True).start()\n        logger.info(\"listening for stream clients on {}\".format(self.serve_address))\n\n        # start polling for new processed frames from the queue and broadcast\n        logger.info(\"worker started\")\n        counter = 0\n        while not self.event_stopper.is_set():\n            if counter == self.target_fps:\n                logger.debug(\"buffer queue size: {}\".format(len(self.buffer)))\n                counter = 0\n            self.reassemble()\n            time.sleep(0.001)\n            counter += 1\n        logger.info(\"worker stopped\")\n\n    def reassemble(self):\n        \"\"\"\n        Main method to run in the loop.\n        \"\"\"\n\n        start = time.time()\n        self.pull_and_push()\n        self.purge_stale_frames()\n\n        frame, delay = self.pick_new_frame()\n        # delay loop to stabilize the video fps\n        end = time.time()\n        elapsed_time = end - start\n        elapsed_time += 0.001  # count in the millisecond in self.run\n        if delay - elapsed_time > 0.0:\n            time.sleep(delay - elapsed_time)\n        if frame:\n            # pull and push again in case\n            # write buffer (assume it takes an insignificant time to execute)\n            self.pull_and_push()\n            broadcast.output.write(frame)\n\n    def pull_and_push(self):\n        \"\"\"\n        Get new frame and push it in the broadcaster's little buffer for stabilization.\n        \"\"\"\n        try:\n            data = self.in_queue.get_nowait()\n        except queue.Empty:\n            # logger.warning(\"no data available for worker\")\n            return\n\n        # extract data\n        boxes = data[\"boxes\"]\n        frame_num = data[\"frame_num\"]\n        yolo3_rtt = data[\"avg_yolo3_rtt\"]\n        crnn_rtt = data[\"avg_crnn_rtt\"]\n        byte_im = data[\"image\"]\n\n        # run statistics\n        self.statistics(yolo3_rtt, crnn_rtt, len(boxes), 0)\n\n        # push frames to buffer and pick new frame\n        self.buffer.append({\"image\": byte_im, \"frame_num\": frame_num})\n\n    def purge_stale_frames(self):\n        \"\"\"\n        Remove any frames older than the latest broadcasted frame.\n        \"\"\"\n        new_buffer = []\n        for frame in self.buffer:\n            if frame[\"frame_num\"] > self.oldest_broadcasted_frame:\n                new_buffer.append(frame)\n        self.buffer = new_buffer\n\n    def pick_new_frame(self):\n        \"\"\"\n        Get the oldest frame from the buffer that isn't older than the last broadcasted frame.\n        \"\"\"\n        current_desired_fps = self.target_fps - self.max_fps_variation\n        delay = 1 / current_desired_fps\n        if len(self.buffer) == 0:\n            return None, delay\n\n        newlist = sorted(self.buffer, key=lambda k: k[\"frame_num\"])\n        idx_to_del = 0\n        for idx, frame in enumerate(newlist):\n            if frame[\"frame_num\"] < self.oldest_broadcasted_frame:\n                idx_to_del = idx + 1\n        newlist = newlist[idx_to_del:]\n\n        if len(newlist) == 0:\n            return None, delay\n\n        self.buffer = newlist[::-1]\n        element = self.buffer.pop()\n        frame = element[\"image\"]\n        self.oldest_broadcasted_frame = element[\"frame_num\"]\n\n        size = len(self.buffer)\n        variation = size - self.target_buffer_size\n        var_perc = variation / self.max_buffer_size_variation\n        current_desired_fps = self.target_fps + var_perc * self.max_fps_variation\n        if current_desired_fps < 0:\n            current_desired_fps = self.target_fps - self.max_fps_variation\n        try:\n            delay = 1 / current_desired_fps\n        except ZeroDivisionError:\n            current_desired_fps = self.target_fps - self.max_fps_variation\n            delay = 1 / current_desired_fps\n\n        return frame, delay\n\n    def statistics(self, yolo3_rtt, crnn_rtt, detections, recognitions):\n        \"\"\"\n        A bunch of RTT and detection/recognition statistics. Not used.\n        \"\"\"\n        if not self.yolo3_rtt:\n            self.yolo3_rtt = yolo3_rtt\n        else:\n            self.yolo3_rtt = self.yolo3_rtt * 0.98 + yolo3_rtt * 0.02\n        if not self.crnn_rtt:\n            self.crnn_rtt = crnn_rtt\n        else:\n            self.crnn_rtt = self.crnn_rtt * 0.98 + crnn_rtt * 0.02\n\n        self.detections += detections\n        self.current_detections = detections\n        self.recognitions += recognitions\n        self.current_recognitions = recognitions\n\n\nclass InferenceWorker(WorkerTemplateThread):\n    \"\"\"\n    Worker that receives frames from a queue, sends requests to 2 cortex APIs for inference reasons,\n    and retrieves the results and puts them in their appropriate queues.\n    \"\"\"\n\n    def __init__(self, event_stopper, in_queue, bc_queue, predicts_queue, cfg, name=None):\n        \"\"\"\n        event_stopper - Event to stop the worker.\n        in_queue - Queue that holds the unprocessed frames.\n        bc_queue - Queue to push into the frames with the overlayed predictions.\n        predicts_queue - Queue to push into the detected license plates that will eventually get written to the disk.\n        cfg - Dictionary config for the worker.\n        name - Name of the worker thread.\n        \"\"\"\n        super(InferenceWorker, self).__init__(event_stopper=event_stopper, name=name)\n        self.in_queue = in_queue\n        self.bc_queue = bc_queue\n        self.predicts_queue = predicts_queue\n        self.rtt_yolo3_ms = None\n        self.rtt_crnn_ms = 0\n        self.runnable = self.cloud_infer\n\n        for key, value in cfg.items():\n            setattr(self, key, value)\n\n    def cloud_infer(self):\n        \"\"\"\n        Main method that runs in the loop.\n        \"\"\"\n        try:\n            data = self.in_queue.get_nowait()\n        except queue.Empty:\n            # logger.warning(\"no data available for worker\")\n            return\n\n        #############################\n\n        # extract frame\n        frame_num = data[\"frame_num\"]\n        img = data[\"jpeg\"]\n        # preprocess/compress the image\n        image = image_from_bytes(img)\n        reduced = compress_image(image)\n        byte_im = image_to_jpeg_bytes(reduced)\n        # encode image\n        img_enc = base64.b64encode(byte_im).decode(\"utf-8\")\n        img_dump = json.dumps({\"img\": img_enc})\n\n        # make inference request\n        resp = self.yolov3_api_request(img_dump)\n        if not resp:\n            return\n\n        #############################\n\n        # parse response\n        r_dict = resp.json()\n        boxes_raw = r_dict[\"boxes\"]\n        boxes = []\n        for b in boxes_raw:\n            box = BoundBox(*b)\n            boxes.append(box)\n\n        # purge bounding boxes with a low confidence score\n        aux = []\n        for b in boxes:\n            label = -1\n            for i in range(len(b.classes)):\n                if b.classes[i] > self.yolov3_obj_thresh:\n                    label = i\n            if label >= 0:\n                aux.append(b)\n        boxes = aux\n        del aux\n\n        # also scale the boxes for later uses\n        camera_source_width = image.shape[1]\n        boxes640 = self.scale_bbox(boxes, self.yolov3_input_size_px, self.bounding_boxes_upscale_px)\n        boxes_source = self.scale_bbox(boxes, self.yolov3_input_size_px, camera_source_width)\n\n        #############################\n\n        # recognize the license plates in case\n        # any bounding boxes have been detected\n        dec_words = []\n        if len(boxes) > 0 and len(self.api_endpoint_crnn) > 0:\n            # create set of images of the detected license plates\n            lps = []\n            try:\n                for b in boxes_source:\n                    lp = image[b.ymin : b.ymax, b.xmin : b.xmax]\n                    jpeg = image_to_jpeg_nparray(\n                        lp, [int(cv2.IMWRITE_JPEG_QUALITY), self.crnn_quality]\n                    )\n                    lps.append(jpeg)\n            except:\n                logger.warning(\"encountered error while converting to jpeg\")\n                pass\n\n            lps = pickle.dumps(lps, protocol=0)\n            lps_enc = base64.b64encode(lps).decode(\"utf-8\")\n            lps_dump = json.dumps({\"imgs\": lps_enc})\n\n            # make request to rcnn API\n            dec_lps = self.rcnn_api_request(lps_dump)\n            dec_lps = self.reorder_recognized_words(dec_lps)\n            for dec_lp in dec_lps:\n                dec_words.append([word[0] for word in dec_lp])\n\n        if len(dec_words) > 0:\n            logger.info(\"Detected the following words: {}\".format(dec_words))\n        else:\n            dec_words = [[] for i in range(len(boxes))]\n\n        #############################\n\n        # draw detections\n        upscaled = resize_image(image, self.bounding_boxes_upscale_px)\n        draw_image = draw_boxes(\n            upscaled,\n            boxes640,\n            overlay_text=dec_words,\n            labels=[\"LP\"],\n            obj_thresh=self.yolov3_obj_thresh,\n        )\n        draw_byte_im = image_to_jpeg_bytes(\n            draw_image, [int(cv2.IMWRITE_JPEG_QUALITY), self.broadcast_quality]\n        )\n\n        #############################\n\n        # push data for further processing in the queue\n        output = {\n            \"boxes\": boxes,\n            \"frame_num\": frame_num,\n            \"avg_yolo3_rtt\": self.rtt_yolo3_ms,\n            \"avg_crnn_rtt\": self.rtt_crnn_ms,\n            \"image\": draw_byte_im,\n        }\n        self.bc_queue.put(output)\n\n        # push predictions to write to disk\n        if len(dec_words) > 0:\n            timestamp = time.time()\n            literal_time = time.ctime(timestamp)\n            predicts = {\"predicts\": dec_words, \"date\": literal_time}\n            self.predicts_queue.put(predicts)\n\n        logger.info(\n            \"Frame Count: {} - Avg YOLO3 RTT: {}ms - Avg CRNN RTT: {}ms - Detected: {}\".format(\n                frame_num, int(self.rtt_yolo3_ms), int(self.rtt_crnn_ms), len(boxes)\n            )\n        )\n\n    def scale_bbox(self, boxes, old_width, new_width):\n        \"\"\"\n        Scale a bounding box.\n        \"\"\"\n        boxes = copy.deepcopy(boxes)\n        scale_percent = new_width / old_width\n        for b in boxes:\n            b.xmin = int(b.xmin * scale_percent)\n            b.ymin = int(b.ymin * scale_percent)\n            b.xmax = int(b.xmax * scale_percent)\n            b.ymax = int(b.ymax * scale_percent)\n        return boxes\n\n    def yolov3_api_request(self, img_dump):\n        \"\"\"\n        Make a request to the YOLOv3 API.\n        \"\"\"\n        # make inference request\n        try:\n            start = time.time()\n            resp = None\n            resp = session.post(\n                self.api_endpoint_yolov3,\n                data=img_dump,\n                headers={\"content-type\": \"application/json\"},\n                timeout=self.timeout,\n            )\n        except requests.exceptions.Timeout as e:\n            logger.warning(\"timeout on yolov3 inference request\")\n            time.sleep(0.10)\n            return None\n        except Exception as e:\n            time.sleep(0.10)\n            logger.warning(\"timing/connection error on yolov3\", exc_info=True)\n            return None\n        finally:\n            end = time.time()\n            if not resp:\n                pass\n            elif resp.status_code != 200:\n                logger.warning(\"received {} status code from yolov3 api\".format(resp.status_code))\n                return None\n\n        # calculate average rtt (use complementary filter)\n        current = int((end - start) * 1000)\n        if not self.rtt_yolo3_ms:\n            self.rtt_yolo3_ms = current\n        else:\n            self.rtt_yolo3_ms = self.rtt_yolo3_ms * 0.98 + current * 0.02\n\n        return resp\n\n    def rcnn_api_request(self, lps_dump, timeout=1.200):\n        \"\"\"\n        Make a request to the CRNN API.\n        \"\"\"\n        # make request to rcnn API\n        try:\n            start = time.time()\n            resp = None\n            resp = session.post(\n                self.api_endpoint_crnn,\n                data=lps_dump,\n                headers={\"content-type\": \"application/json\"},\n                timeout=self.timeout,\n            )\n        except requests.exceptions.Timeout as e:\n            logger.warning(\"timeout on crnn inference request\")\n        except:\n            logger.warning(\"timing/connection error on crnn\", exc_info=True)\n        finally:\n            end = time.time()\n            dec_lps = []\n            if not resp:\n                pass\n            elif resp.status_code != 200:\n                logger.warning(\"received {} status code from crnn api\".format(resp.status_code))\n            else:\n                r_dict = resp.json()\n                dec_lps = r_dict[\"license-plates\"]\n\n        # calculate average rtt (use complementary filter)\n        current = int((end - start) * 1000)\n        self.rtt_crnn_ms = self.rtt_crnn_ms * 0.98 + current * 0.02\n\n        return dec_lps\n\n    def reorder_recognized_words(self, detected_images):\n        \"\"\"\n        Reorder the detected words in each image based on the average horizontal position of each word.\n        Sorting them in ascending order.\n        \"\"\"\n\n        reordered_images = []\n        for detected_image in detected_images:\n\n            # computing the mean average position for each word\n            mean_horizontal_positions = []\n            for words in detected_image:\n                box = words[1]\n                y_positions = [point[0] for point in box]\n                mean_y_position = mean(y_positions)\n                mean_horizontal_positions.append(mean_y_position)\n            indexes = np.argsort(mean_horizontal_positions)\n\n            # and reordering them\n            reordered = []\n            for index, words in zip(indexes, detected_image):\n                reordered.append(detected_image[index])\n            reordered_images.append(reordered)\n\n        return reordered_images\n\n\nclass Flusher(WorkerTemplateThread):\n    \"\"\"\n    Thread which removes the elements of a queue when its size crosses a threshold.\n    Used when there are too many frames are pilling up in the queue.\n    \"\"\"\n\n    def __init__(self, queue, threshold, name=None):\n        \"\"\"\n        queue - Queue to remove the elements from when the threshold is triggered.\n        threshold - Number of elements.\n        name - Name of the thread.\n        \"\"\"\n        super(Flusher, self).__init__(event_stopper=td.Event(), name=name)\n        self.queue = queue\n        self.threshold = threshold\n        self.runnable = self.flush_pipe\n\n    def flush_pipe(self):\n        \"\"\"\n        Main method to run in the loop.\n        \"\"\"\n        current = self.queue.qsize()\n        if current > self.threshold:\n            try:\n                for i in range(current):\n                    self.queue.get_nowait()\n                logger.warning(\"flushed {} elements from the frames queue\".format(current))\n            except queue.Empty:\n                logger.debug(\"flushed too many elements from the queue\")\n        time.sleep(0.5)\n"
  }
]