[
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Seungsu Lim\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "### Overview\nReal time Fight Detection Based on 2D Pose Estimation and RNN Action Recognition. \n\nThis project is based on [darknet_server](https://github.com/imsoo/darknet_server). if you want to run this experiment take a look how to build [here](https://github.com/imsoo/darknet_server#how-to-build). \n\n\n| ```Fight Detection System Pipeline``` |\n|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71320889-4e737b80-24f5-11ea-8aac-4b4a527c6e64.png\" width=\"100%\" height=\"45%\">|\n\n### Pose Estimation and Object Tracking\nMade pipeline to get 2D pose time series data in video sequence.\n\nIn worker process, Pose Estimation is performed using ***OpenPose***. Input image pass through the Pose_detector, and get the people object which packed up people joint coordinates. People object serialized and send to sink process.\n\nIn Sink process, people object convert to person objects. and every person object are send to Tracker. Tracker receive person object and produces object identities Using ***SORT***(simple online and realtime tracking algorithm).\n\nFinally, can get the joint time series data per each person. each person's Time series data is managed by queue container. 
so a person object always holds the most recent 32 frames.\n\n| ```Tracking Pipeline``` | ```Time series data``` |\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71316697-6b895980-24b7-11ea-92a1-33ec0dcd996a.png\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71302619-b5f3d300-23f0-11ea-9ab0-36791dd9aa48.png\" width=\"150%\" height=\"30%\">|\n\n* #### Examples of Result (Pose Estimation and Object Tracking)\n\n|<img src=\"https://user-images.githubusercontent.com/11255376/71260111-64f6c700-237d-11ea-918c-1e8d9f05d963.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71260228-b3a46100-237d-11ea-9116-b050841a760b.gif\" width=\"150%\" height=\"30%\">|\n|:---:|:---:|\n\n\n### Collect action video\nWe collected videos of four action types (Standing, Walking, Punching, Kicking).\n\nThe Punching videos are a subset of the ***Berkeley Multimodal Human Action Database (MHAD) dataset***.\nThis data comprises 12 subjects performing punching actions for 5 repetitions, filmed from 4 angles. (http://tele-immersion.citris-uc.org/berkeley_mhad)\n\nThe others (Standing, Walking, Kicking) are subsets of the ***CMU Panoptic Dataset***. From the Range of Motion videos (171204_pose3, 171204_pose5, 171204_pose6), we cut out the three action types: I recorded a timestamp range per action type and cut the video using a Python script (util/concat.py). This data comprises 13 subjects performing the three actions, filmed from 31 angles. 
(http://domedb.perception.cs.cmu.edu/index.html)\n\n``` jsonc\n0, 1, 10    // 0 : Standing, 1 : begin timestamp, 10: end timestamp\n1, 11, 15   // 1 : Walking, 11 : begin timestamp, 15: end timestamp\n3, 39, 46   // 3 : Kicking, 39 : begin timestamp, 46: end timestamp\n```\n\n* #### Examples of Dataset (Stand & Walk)\n<table>\n<tr><th><code>Stand (CMU Panoptic Dataset)</code></th><th><code>Walk (CMU Panoptic Dataset)</code></th>\n<tr valign=\"middle\">\n<td>\n\n|<img src=\"https://user-images.githubusercontent.com/11255376/71252302-f65b3e80-2367-11ea-8718-a25a0ac7f14b.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71252304-f6f3d500-2367-11ea-9b8d-f5eff5b5959c.gif\" width=\"150%\" height=\"30%\">|\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71252305-f78c6b80-2367-11ea-868e-988a94d28bb7.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71252307-f8250200-2367-11ea-9ab0-b1fc29672555.gif\" width=\"150%\" height=\"30%\">|\n</td>\n<td>\n\n|<img src=\"https://user-images.githubusercontent.com/11255376/71253038-0aa03b00-236a-11ea-99ae-80cd7bd1a284.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71253041-0b38d180-236a-11ea-834e-04c4c96a1974.gif\" width=\"150%\" height=\"30%\">|\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71253042-0bd16800-236a-11ea-9a15-a0183d3be629.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71253043-0d029500-236a-11ea-94c2-993b969bb57c.gif\" width=\"150%\" height=\"30%\">|\n</td>\n</tr>\n</table>\n\n* #### Examples of Dataset (Punch & Kick)\n<table>\n<tr><th><code>Punch (Berkeley MHAD Dataset)</code></th><th><code>Kick (CMU Panoptic Dataset)</code></th>\n<tr>\n<td>\n\n|<img src=\"https://user-images.githubusercontent.com/11255376/71253986-cc584b00-236c-11ea-90e7-ff0934ffe38e.gif\" 
width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71254000-e2660b80-236c-11ea-8aee-fb874a2d5365.gif\" width=\"150%\" height=\"30%\">|\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71254136-5f918080-236d-11ea-9730-3368f3d4b143.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71254138-602a1700-236d-11ea-8077-848ff1233b8c.gif\" width=\"150%\" height=\"30%\">|\n\n</td>\n<td>\n\n|<img src=\"https://user-images.githubusercontent.com/11255376/71253059-17bd2a00-236a-11ea-895f-ccfd3bf3fc14.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71253060-18ee5700-236a-11ea-9f23-e0b0e600d5f4.gif\" width=\"150%\" height=\"30%\">|\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71253062-1986ed80-236a-11ea-9699-8c5a5d2670b4.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71253066-1ab81a80-236a-11ea-988e-e1f1cf301031.gif\" width=\"150%\" height=\"30%\">|\n\n</td>\n</tr>\n</table>\n\n### Make training dataset\nPut action video data to tracking pipeline and get joint time series data per each person. 
These results (joint positions) are processed into feature vectors.\n\n* ***Angle*** : the joint angle in the current frame\n* ***ΔPoint*** : the distance between a joint's position in the prior frame and in the current frame\n* ***ΔAngle*** : the change in a joint's angle between the prior frame and the current frame.\n\n* #### Examples of feature vector (***ΔPoint*** & ***ΔAngle***)\n\n| ***ΔPoint*** | ***ΔAngle*** |\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71304150-f3af2680-2405-11ea-8837-7c741b358956.gif\" width=\"130%\" height=\"50%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71304149-f1e56300-2405-11ea-8699-acece9a11722.gif\" width=\"130%\" height=\"50%\">|\n\n* #### Overview of feature vector\n\n<table>\n<tr><th><code>Feature Vector</code></th><th><code>OpenPose COCO output format</code></th>\n<tr>\n<td width=\"66%\">\n\n| *IDX* | *0* | *1* | *2* | *3* | *4* | *5* | *6* | *7* |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| ***Angle*** | 2-3 | 3-4 | 5-6 | 6-7 | 8-9 | 9-10 | 11-12 | 12-13 |\n| ***ΔAngle*** | 2-3 | 3-4 | 5-6 | 6-7 | 8-9 | 9-10 | 11-12 | 12-13 |\n| ***ΔPoint*** | 3 | 4 | 6 | 7 | 9 | 10 | 12 | 13 |\n\n###### *※ 2 : RShoulder, 3 : RElbow, 4 : RWrist, 5 : LShoulder, 6 : LElbow, 7 : LWrist, 8 : RHip, 9 : RKnee, 10 : RAnkle, 11 : LHip, 12 : LKnee, 13 : LAnkle*\n</td>\n<td width=\"34%\">\n\n|<img src=\"https://user-images.githubusercontent.com/11255376/71308335-866bb780-243e-11ea-9593-e13d80b15059.png\" width=\"100%\" height=\"30%\">|\n|:---:|\n\n</td>\n</tr>\n</table>\n\nFinally, we compute a feature vector for every frame and build action training samples, each consisting of 32 consecutive frame feature vectors. Adjacent samples overlap by 26 frames. This yields the four-type action dataset.\nA summary of the dataset is:\n* Standing : 7474 (7474 : pose3_stand) * 32 frames\n* Walking : 4213 (854 : pose3_walk, 3359 : pose6_walk) * 32 frames\n* Punching : 2187 (1115 : mhad_punch, 
1072 : mhad_punch_flip) * 32 frames\n* Kicking : 4694 (2558 : pose3_kick, 2136 : pose6_kick) * 32 frames\n* total : 18573 * 32 frames (https://drive.google.com/open?id=1ZNJDzQUjo2lDPwGoVkRLg77eA57dKUqx)\n\n|<img src=\"https://user-images.githubusercontent.com/11255376/71316454-1f3c1a80-24b3-11ea-9096-94e8cdc7adac.png\" width=\"100%\" height=\"50%\">|\n|:---:|\n\n### RNN Training and Result\nThe network used in this experiment is based on:\n* Guillaume Chevalier, 'LSTMs for Human Activity Recognition, 2016' https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition\n* stuarteiffert, 'RNN-for-Human-Activity-Recognition-using-2D-Pose-Input' https://github.com/stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input\n\nTraining was run for 300 epochs with a batch size of 1024. (weights/action.h5)\n\nAfter training, an action recognition pipeline was built to produce recognition results in real time. The sink process sends each person's time-series feature vector to the action process as a string. The action process feeds the received data into the RNN and sends back the prediction result. 
(0 : Standing, 1 : Walking, 2 : Punching, 3 : Kicking)\n\n| ```Action Recognition Pipeline``` | ```RNN Model``` |\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71318418-ed877b80-24d3-11ea-993c-d776d8e980c4.png\" width=\"100%\" height=\"50%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71318651-c7afa600-24d6-11ea-8baa-23153316bee8.png\" width=\"100%\" height=\"50%\">|\n\n\n* #### Examples of Result (RNN Action Recognition)\n\n| [```Standing```](https://www.youtube.com/watch?v=Orc0Eq9bWOs) | [```Walking```](https://www.youtube.com/watch?v=Orc0Eq9bWOs) | [```Punching```](https://www.youtube.com/watch?v=kbgkeTTSau8) | [```Kicking```](https://www.youtube.com/watch?v=R1UWcG9N6tI) |\n|:---:|:---:|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71256358-d6317c80-2373-11ea-8fd9-2ae1777c8a0f.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71256359-d6ca1300-2373-11ea-812a-babb3b5b2ad5.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71256361-d7fb4000-2373-11ea-8a17-26ce9f9dc8f5.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71256362-d893d680-2373-11ea-841c-ced2f1d4ba02.gif\" width=\"150%\" height=\"30%\">|\n\n### Fight Detection\n\nThis stage checks whether a person who kicks or punches is actually hitting someone. 
If one person has hit another, the two are marked as each other's enemy.\nThe system counts this as a fight and keeps tracking the pair for as long as they remain in the frame.\n\n| ```Fight Detection Pipeline``` |\n|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71320794-cc368780-24f3-11ea-8928-8e920bc69f26.png\" width=\"100%\" height=\"50%\">|\n\n* #### Examples of Result\n\n| [```Fighting Championship```](https://www.youtube.com/watch?v=cIhoK4cPbC4) | [```CCTV Video```](https://www.youtube.com/watch?v=stJPOb6zW7U) |\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71256826-54dae980-2375-11ea-808b-be89bfaea5c1.gif\" width=\"150%\" height=\"30%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71356085-809fde80-25c4-11ea-90a9-a63eef4d6629.gif\" width=\"150%\" height=\"30%\">|\n\n| [```Sparring video A```](https://www.youtube.com/watch?v=x0kJmieuFzI) | [```Sparring video B```](https://www.youtube.com/watch?v=x0kJmieuFzI) |\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71356417-821dd680-25c5-11ea-945a-d4dab5f9e1e0.gif\" width=\"130%\" height=\"60%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71356526-df198c80-25c5-11ea-98bf-11e4edf81430.gif\" width=\"130%\" height=\"60%\">|\n\n* #### Examples of Result (Failure case)\n\n| [```Fake Person```](https://www.youtube.com/watch?v=kbgkeTTSau8) | [```Small Person```](https://www.youtube.com/watch?v=aUdKzb4LGJI) |\n|:---:|:---:|\n|<img src=\"https://user-images.githubusercontent.com/11255376/71256575-7daeaf00-2374-11ea-82dd-579a07788acc.gif\" width=\"130%\" height=\"50%\">|<img src=\"https://user-images.githubusercontent.com/11255376/71257257-94ee9c00-2376-11ea-8940-659a7eae08b8.gif\" width=\"130%\" height=\"50%\">|\n\n### References\n* #### Darknet : https://github.com/AlexeyAB/darknet\n* #### OpenCV : https://github.com/opencv/opencv\n* #### ZeroMQ : https://github.com/zeromq/libzmq\n* #### json-c : https://github.com/json-c/json-c\n* #### 
openpose-darknet : https://github.com/lincolnhard/openpose-darknet\n* #### sort-cpp : https://github.com/mcximing/sort-cpp\n* #### cpp-base64 : https://github.com/ReneNyffenegger/cpp-base64\n* #### mem_pool : https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how-to-implement-it\n* #### share_queue : https://stackoverflow.com/questions/36762248/why-is-stdqueue-not-thread-safe\n* #### RNN-for-Human-Activity-Recognition-using-2D-Pose-Input : https://github.com/stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input\n* #### LSTM-Human-Activity-Recognition : https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition\n"
  },
  {
    "path": "client/darknet_client/Makefile",
    "content": "DEBUG = 1\n\nCPP = g++\nCOMMON = -DOPENCV\nCXXFLAGS = -g -Wall -O2 -std=c++11 -DOPENCV\nLDFLAGS = -lstdc++ -lpthread -lzmq -lrt -ltbb\n\nCXXFLAGS += `pkg-config --cflags json-c`\nCXXFLAGS += `pkg-config --cflags opencv`\n\nLDFLAGS += `pkg-config --libs json-c`\nLDFLAGS += `pkg-config --libs opencv`\n\nifeq ($(DEBUG), 1)\nCOMMON += -DDEBUG\nendif\n\nVPATH = ./src/\nOBJDIR = ./obj/\nDEPS = $(wildcard src/*.h*)\n\nEXEC1 = darknet_client\nEXEC1_OBJ = main.o frame.o mem_pool.o base64.o args.o util.o\nEXEC1_OBJS = $(addprefix $(OBJDIR), $(EXEC1_OBJ))\n\nOBJS = $(EXEC1_OBJS) \nEXECS = $(EXEC1) \n\nall: $(OBJDIR) $(EXECS)\n\n$(EXEC1): $(EXEC1_OBJS)\n\t$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)\n\n$(OBJDIR)%.o: %.cpp $(DEPS)\n\t$(CPP) $(COMMON) $(CXXFLAGS) -c $< -o $@ \n\n$(OBJDIR):\n\tmkdir -p $(OBJDIR) res\n\nclean:\n\trm -rf $(OBJS) $(EXECS) \n"
  },
  {
    "path": "client/darknet_client/darknet_client.vcxproj",
    "content": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"15.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"src\\args.cpp\" />\n    <ClCompile Include=\"src\\base64.cpp\" />\n    <ClCompile Include=\"src\\frame.cpp\" />\n    <ClCompile Include=\"src\\main.cpp\" />\n    <ClCompile Include=\"src\\mem_pool.cpp\" />\n    <ClCompile Include=\"src\\util.cpp\" />\n  </ItemGroup>\n  <ItemGroup>\n    <ClInclude Include=\"src\\args.hpp\" />\n    <ClInclude Include=\"src\\base64.hpp\" />\n    <ClInclude Include=\"src\\frame.hpp\" />\n    <ClInclude Include=\"src\\mem_pool.hpp\" />\n    <ClInclude Include=\"src\\share_queue.hpp\" />\n    <ClInclude Include=\"src\\util.hpp\" />\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <VCProjectVersion>15.0</VCProjectVersion>\n    <ProjectGuid>{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}</ProjectGuid>\n    <RootNamespace>darknetclient</RootNamespace>\n    <WindowsTargetPlatformVersion>10.0.17763.0</WindowsTargetPlatformVersion>\n    <ProjectName>darknet_client</ProjectName>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup 
Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"Shared\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import 
Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup />\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <SDLCheck>true</SDLCheck>\n      <ConformanceMode>true</ConformanceMode>\n    </ClCompile>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <SDLCheck>true</SDLCheck>\n      <ConformanceMode>true</ConformanceMode>\n      <AdditionalIncludeDirectories>C:\\opencv\\build\\include;C:\\git\\vcpkg\\packages\\zeromq_x64-windows-static\\include</AdditionalIncludeDirectories>\n      <PreprocessorDefinitions>ZMQ_STATIC;_MBCS;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      
<AdditionalDependencies>opencv_world340d.lib;C:\\git\\vcpkg\\packages\\zeromq_x64-windows-static\\debug\\lib\\libzmq-mt-sgd-4_3_3.lib;Ws2_32.lib;Iphlpapi.lib;%(AdditionalDependencies)</AdditionalDependencies>\n      <AdditionalLibraryDirectories>C:\\opencv\\build\\x64\\vc15\\lib;</AdditionalLibraryDirectories>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <SDLCheck>true</SDLCheck>\n      <ConformanceMode>true</ConformanceMode>\n    </ClCompile>\n    <Link>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <SDLCheck>true</SDLCheck>\n      <ConformanceMode>true</ConformanceMode>\n      <AdditionalIncludeDirectories>C:\\opencv\\build\\include;C:\\git\\vcpkg\\packages\\zeromq_x64-windows-static\\include</AdditionalIncludeDirectories>\n      <PreprocessorDefinitions>ZMQ_STATIC;_MBCS;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalDependencies>C:\\opencv\\build\\x64\\vc15\\lib\\opencv_world340.lib;C:\\git\\vcpkg\\packages\\zeromq_x64-windows-static\\lib\\libzmq-mt-s-4_3_3.lib;Ws2_32.lib;Iphlpapi.lib;%(AdditionalDependencies)</AdditionalDependencies>\n      
<AdditionalLibraryDirectories>C:\\opencv\\build\\x64\\vc15\\lib;</AdditionalLibraryDirectories>\n    </Link>\n  </ItemDefinitionGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "client/darknet_client/darknet_client.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup>\n    <Filter Include=\"소스 파일\">\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\n    </Filter>\n    <Filter Include=\"헤더 파일\">\n      <UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>\n      <Extensions>h;hh;hpp;hxx;hm;inl;inc;ipp;xsd</Extensions>\n    </Filter>\n    <Filter Include=\"리소스 파일\">\n      <UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>\n      <Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>\n    </Filter>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"src\\args.cpp\">\n      <Filter>소스 파일</Filter>\n    </ClCompile>\n    <ClCompile Include=\"src\\base64.cpp\">\n      <Filter>소스 파일</Filter>\n    </ClCompile>\n    <ClCompile Include=\"src\\frame.cpp\">\n      <Filter>소스 파일</Filter>\n    </ClCompile>\n    <ClCompile Include=\"src\\main.cpp\">\n      <Filter>소스 파일</Filter>\n    </ClCompile>\n    <ClCompile Include=\"src\\mem_pool.cpp\">\n      <Filter>소스 파일</Filter>\n    </ClCompile>\n    <ClCompile Include=\"src\\util.cpp\">\n      <Filter>소스 파일</Filter>\n    </ClCompile>\n  </ItemGroup>\n  <ItemGroup>\n    <ClInclude Include=\"src\\args.hpp\">\n      <Filter>헤더 파일</Filter>\n    </ClInclude>\n    <ClInclude Include=\"src\\base64.hpp\">\n      <Filter>헤더 파일</Filter>\n    </ClInclude>\n    <ClInclude Include=\"src\\frame.hpp\">\n      <Filter>헤더 파일</Filter>\n    </ClInclude>\n    <ClInclude Include=\"src\\mem_pool.hpp\">\n      <Filter>헤더 파일</Filter>\n    </ClInclude>\n    <ClInclude Include=\"src\\share_queue.hpp\">\n      <Filter>헤더 파일</Filter>\n    </ClInclude>\n    <ClInclude Include=\"src\\util.hpp\">\n      <Filter>헤더 파일</Filter>\n    </ClInclude>\n  
</ItemGroup>\n</Project>"
  },
  {
    "path": "client/darknet_client/darknet_client.vcxproj.user",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"15.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <LocalDebuggerEnvironment>PATH=C:\\opencv\\build\\x64\\vc15\\bin;%PATH%</LocalDebuggerEnvironment>\n    <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <LocalDebuggerCommandArguments>\n    </LocalDebuggerCommandArguments>\n    <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n  </PropertyGroup>\n</Project>"
  },
  {
    "path": "client/darknet_client/src/args.cpp",
    "content": "// https://github.com/pjreddie/template\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"args.hpp\"\n\nvoid del_arg(int argc, char **argv, int index)\n{\n  int i;\n  for (i = index; i < argc - 1; ++i) argv[i] = argv[i + 1];\n  argv[i] = 0;\n}\n\nint find_arg(int argc, char* argv[], const char *arg)\n{\n  int i;\n  for (i = 0; i < argc; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      del_arg(argc, argv, i);\n      return 1;\n    }\n  }\n  return 0;\n}\n\nint find_int_arg(int argc, char **argv, const char *arg, int def)\n{\n  int i;\n  for (i = 0; i < argc - 1; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      def = atoi(argv[i + 1]);\n      del_arg(argc, argv, i);\n      del_arg(argc, argv, i);\n      break;\n    }\n  }\n  return def;\n}\n\nfloat find_float_arg(int argc, char **argv, const char *arg, float def)\n{\n  int i;\n  for (i = 0; i < argc - 1; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      def = atof(argv[i + 1]);\n      del_arg(argc, argv, i);\n      del_arg(argc, argv, i);\n      break;\n    }\n  }\n  return def;\n}\n\nconst char *find_char_arg(int argc, char **argv, const char *arg, const char *def)\n{\n  int i;\n  for (i = 0; i < argc - 1; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      def = argv[i + 1];\n      del_arg(argc, argv, i);\n      del_arg(argc, argv, i);\n      break;\n    }\n  }\n  return def;\n}\n"
  },
  {
    "path": "client/darknet_client/src/args.hpp",
    "content": "// https://github.com/pjreddie/template\n\n#ifndef ARGS_H\n#define ARGS_H\n\nint find_arg(int argc, char* argv[], const char *arg);\nint find_int_arg(int argc, char **argv, const char *arg, int def);\nfloat find_float_arg(int argc, char **argv, const char *arg, float def);\nconst char *find_char_arg(int argc, char **argv, const char *arg, const char *def);\n\n#endif"
  },
  {
    "path": "client/darknet_client/src/base64.cpp",
    "content": "﻿/*\n   base64.cpp and base64.h\n   base64 encoding and decoding with C++.\n   Version: 1.01.00\n   Copyright (C) 2004-2017 René Nyffenegger\n   This source code is provided 'as-is', without any express or implied\n   warranty. In no event will the author be held liable for any damages\n   arising from the use of this software.\n   Permission is granted to anyone to use this software for any purpose,\n   including commercial applications, and to alter it and redistribute it\n   freely, subject to the following restrictions:\n   1. The origin of this source code must not be misrepresented; you must not\n      claim that you wrote the original source code. If you use this source code\n      in a product, an acknowledgment in the product documentation would be\n      appreciated but is not required.\n   2. Altered source versions must be plainly marked as such, and must not be\n      misrepresented as being the original source code.\n   3. This notice may not be removed or altered from any source distribution.\n   René Nyffenegger rene.nyffenegger@adp-gmbh.ch\n*/\n\n#include \"base64.hpp\"\n#include <iostream>\n\nstatic const std::string base64_chars =\n\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n\"abcdefghijklmnopqrstuvwxyz\"\n\"0123456789+/\";\n\n\nstatic inline bool is_base64(unsigned char c) {\n  return (isalnum(c) || (c == '+') || (c == '/'));\n}\n\nstd::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len) {\n  std::string ret;\n  int i = 0;\n  int j = 0;\n  unsigned char char_array_3[3];\n  unsigned char char_array_4[4];\n\n  while (in_len--) {\n    char_array_3[i++] = *(bytes_to_encode++);\n    if (i == 3) {\n      char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;\n      char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);\n      char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);\n      char_array_4[3] = char_array_3[2] & 0x3f;\n\n      for (i = 0; (i < 4); i++)\n    
    ret += base64_chars[char_array_4[i]];\n      i = 0;\n    }\n  }\n\n  if (i)\n  {\n    for (j = i; j < 3; j++)\n      char_array_3[j] = '\\0';\n\n    char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;\n    char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);\n    char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);\n\n    for (j = 0; (j < i + 1); j++)\n      ret += base64_chars[char_array_4[j]];\n\n    while ((i++ < 3))\n      ret += '=';\n\n  }\n\n  return ret;\n\n}\n\nstd::string base64_decode(std::string const& encoded_string) {\n  size_t in_len = encoded_string.size();\n  int i = 0;\n  int j = 0;\n  int in_ = 0;\n  unsigned char char_array_4[4], char_array_3[3];\n  std::string ret;\n\n  while (in_len-- && (encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {\n    char_array_4[i++] = encoded_string[in_]; in_++;\n    if (i == 4) {\n      for (i = 0; i < 4; i++)\n        char_array_4[i] = base64_chars.find(char_array_4[i]) & 0xff;\n\n      char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);\n      char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);\n      char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];\n\n      for (i = 0; (i < 3); i++)\n        ret += char_array_3[i];\n      i = 0;\n    }\n  }\n\n  if (i) {\n    for (j = 0; j < i; j++)\n      char_array_4[j] = base64_chars.find(char_array_4[j]) & 0xff;\n\n    char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);\n    char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);\n\n    for (j = 0; (j < i - 1); j++) ret += char_array_3[j];\n  }\n\n  return ret;\n}"
  },
  {
    "path": "client/darknet_client/src/base64.hpp",
    "content": "//\n//  base64 encoding and decoding with C++.\n//  Version: 1.01.00\n//\n\n#ifndef BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A\n#define BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A\n\n#include <string>\n\nstd::string base64_encode(unsigned char const*, unsigned int len);\nstd::string base64_decode(std::string const& s);\n\n#endif /* BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A */"
  },
  {
    "path": "client/darknet_client/src/frame.cpp",
    "content": "#include <sstream>\t// for stringstream\n#include <json-c/json.h>\t// for json\n#include <cstring> // for memcpy\n#include \"frame.hpp\"\n#include \"base64.hpp\"\n\nFrame_pool::Frame_pool()\n{\n  mem_pool_msg = new CMemPool(MEM_POOL_UNIT_NUM, MSG_BUF_LEN);\n  mem_pool_seq = new CMemPool(MEM_POOL_UNIT_NUM, SEQ_BUF_LEN);\n  mem_pool_det = new CMemPool(MEM_POOL_UNIT_NUM, DET_BUF_LEN);\n};\n\nFrame_pool::Frame_pool(int unit_num)\n{\n  mem_pool_msg = new CMemPool(unit_num, MSG_BUF_LEN);\n  mem_pool_seq = new CMemPool(unit_num, SEQ_BUF_LEN);\n  mem_pool_det = new CMemPool(unit_num, DET_BUF_LEN);\n};\n\nFrame_pool::~Frame_pool()\n{\n  delete mem_pool_msg;\n  delete mem_pool_seq;\n  delete mem_pool_det;\n};\n\nFrame Frame_pool::alloc_frame(void) {\n  Frame frame;\n  frame_init(frame);\n  return frame;\n};\n\nvoid Frame_pool::free_frame(Frame& frame) {\n  mem_pool_seq->Free((void *)frame.seq_buf);\n  mem_pool_msg->Free((void *)frame.msg_buf);\n  mem_pool_det->Free((void *)frame.det_buf);\n}\n\nvoid Frame_pool::frame_init(Frame& frame) {\n  frame.seq_len = frame.msg_len = frame.det_len = 0;\n  frame.seq_buf = (unsigned char *)(mem_pool_seq->Alloc(SEQ_BUF_LEN, true));\n  frame.msg_buf = (unsigned char *)(mem_pool_msg->Alloc(MSG_BUF_LEN, true));\n  frame.det_buf = (unsigned char *)(mem_pool_det->Alloc(DET_BUF_LEN, true));\n};\n\nint frame_to_json(void* buf, const Frame& frame) {\n  std::stringstream ss;\n  ss << \"{\\n\\\"seq\\\":\\\"\" << base64_encode((unsigned char *)frame.seq_buf, frame.seq_len) << \"\\\",\\n\"\n    << \"\\\"msg\\\": \\\"\" << base64_encode((unsigned char*)(frame.msg_buf), frame.msg_len) << \"\\\",\\n\"\n    << \"\\\"det\\\": \\\"\" << base64_encode((unsigned char*)(frame.det_buf), frame.det_len)\n    << \"\\\"\\n}\";\n\n  std::memcpy(buf, ss.str().c_str(), ss.str().size());\n  ((unsigned char*)buf)[ss.str().size()] = '\\0';\n  return ss.str().size();\n};\n\nvoid json_to_frame(void* buf, Frame& frame) {\n  json_object *raw_obj;\n  raw_obj = json_tokener_parse((const char*)buf);\n\n  json_object *seq_obj = 
json_object_object_get(raw_obj, \"seq\");\n  json_object *msg_obj = json_object_object_get(raw_obj, \"msg\");\n  json_object *det_obj = json_object_object_get(raw_obj, \"det\");\n\n  std::string seq(base64_decode(json_object_get_string(seq_obj)));\n  std::string msg(base64_decode(json_object_get_string(msg_obj)));\n  std::string det(base64_decode(json_object_get_string(det_obj)));\n\n  frame.seq_len = seq.size();\n  frame.msg_len = msg.size();\n  frame.det_len = det.size();\n\n  std::memcpy(frame.seq_buf, seq.c_str(), frame.seq_len);\n  ((unsigned char*)frame.seq_buf)[frame.seq_len] = '\\0';\n\n  std::memcpy(frame.msg_buf, msg.c_str(), frame.msg_len);\n  ((unsigned char*)frame.msg_buf)[frame.msg_len] = '\\0';\n\n  std::memcpy(frame.det_buf, det.c_str(), frame.det_len);\n  ((unsigned char*)frame.det_buf)[frame.det_len] = '\\0';\n\n  // json_object_object_get() returns borrowed references, so only the root\n  // object is released here; putting the child objects too would double free.\n  json_object_put(raw_obj);\n};"
  },
  {
    "path": "client/darknet_client/src/frame.hpp",
    "content": "#ifndef __FRAME_HPP\n#define __FRAME_HPP\n#include \"mem_pool.hpp\"\n\nstruct Frame {\n  int seq_len;\n  int msg_len;\n  int det_len;\n  void *seq_buf;\n  void *msg_buf;\n  void *det_buf;\n};\n\nconst int SEQ_BUF_LEN = 100;\nconst int MSG_BUF_LEN = 76800;\nconst int DET_BUF_LEN = 25600;\nconst int JSON_BUF_LEN = MSG_BUF_LEN * 2;\nclass Frame_pool\n{\nprivate:\n  CMemPool *mem_pool_msg;\n  CMemPool *mem_pool_seq;\n  CMemPool *mem_pool_det;\n  const int MEM_POOL_UNIT_NUM = 5000;\n\npublic:\n  Frame_pool();\n  Frame_pool(int unit_num);\n  Frame alloc_frame(void);\n  void free_frame(Frame& frame);\n  void frame_init(Frame& frame);\n  ~Frame_pool();\n};\n\nint frame_to_json(void* buf, const Frame& frame);\nvoid json_to_frame(void* buf, Frame& frame);\n#endif"
  },
  {
    "path": "client/darknet_client/src/main.cpp",
    "content": "#include <zmq.h>\n#include <iostream>\n#include <thread>\n#include <queue>\n#include <chrono>\n#include <string>\n#include <fstream>\n#include <csignal>\n#ifdef __linux__\n#include <tbb/concurrent_priority_queue.h>\n#elif _WIN32\n#include <concurrent_priority_queue.h>\n#endif\n#include <opencv2/opencv.hpp>\n#include \"share_queue.hpp\"\n#include \"frame.hpp\"\n#include \"util.hpp\"\n#include \"args.hpp\"\n\n#ifdef __linux__\n#define FD_SETSIZE 4096\nusing namespace tbb;\n#elif _WIN32\nusing namespace concurrency;\n#endif\nusing namespace cv;\nusing namespace std;\n\n// thread\nvoid fetch_thread(void);\nvoid capture_thread(void);\nvoid recv_thread(void);\nvoid output_show_thread(void);\nvoid input_show_thread(void);\n\nvolatile bool fetch_flag = false;\nvolatile bool exit_flag = false;\nvolatile int final_exit_flag = 0;\n\n// ZMQ\nvoid *context;\nvoid *sock_push;\nvoid *sock_sub;\n\n// pair\nclass ComparePair\n{\npublic:\n  bool operator()(pair<long, Frame> n1, pair<long, Frame> n2) {\n    return n1.first > n2.first;\n  }\n};\nFrame_pool *frame_pool;\nconcurrent_priority_queue<pair<long, Frame>, ComparePair> recv_queue;\n\n// Queue\nSharedQueue<Mat> cap_queue;\nSharedQueue<Mat> fetch_queue;\n\n// opencv\nVideoCapture cap;\nVideoWriter writer;\nstatic Mat mat_show_output;\nstatic Mat mat_show_input;\nstatic Mat mat_recv;\nstatic Mat mat_cap;\nstatic Mat mat_fetch;\n\nconst int cap_width = 640;\nconst int cap_height = 480;\ndouble delay;\nlong volatile show_frame = 1;\ndouble end_frame;\n\n// option\nbool cam_input_flag;\nbool vid_input_flag;\nbool dont_show_flag;\nbool json_output_flag;\nbool vid_output_flag;\n\n// output\nstring out_json_path;\nstring out_vid_path;\n\nofstream out_json_file;\n\nvoid sig_handler(int s)\n{\n  exit_flag = true;\n}\n\nint main(int argc, char *argv[])\n{\n  if (argc < 2) {\n    std::cerr << \"Usage: \" << argv[0] << \" <-addr ADDR> <-cam CAM_NUM | -vid VIDEO_PATH> [-dont_show] [-out_json] [-out_vid] \\n\" << std::endl;\n  
  return 0;\n  }\n\n  // install signal\n  std::signal(SIGINT, sig_handler);\n\n  // option init\n  int cam_num = find_int_arg(argc, argv, \"-cam\", -1);\n  if (cam_num != -1)\n    cam_input_flag = true;\n\n  const char *vid_def_path = \"./test.mp4\";\n  const char *vid_path = find_char_arg(argc, argv, \"-vid\", vid_def_path);\n  if (vid_path != vid_def_path)\n    vid_input_flag = true;\n\n  dont_show_flag = find_arg(argc, argv, \"-dont_show\");\n  json_output_flag = find_arg(argc, argv, \"-out_json\");\n  vid_output_flag = find_arg(argc, argv, \"-out_vid\");\n\n  // frame_pool init\n  frame_pool = new Frame_pool(5000);\n\n  // ZMQ\n  const char *addr = find_char_arg(argc, argv, \"-addr\", \"127.0.0.1\");\n  context = zmq_ctx_new();\n\n  sock_push = zmq_socket(context, ZMQ_PUSH);\n  zmq_connect(sock_push, ((std::string(\"tcp://\") + addr) + \":5575\").c_str());\n\n  sock_sub = zmq_socket(context, ZMQ_SUB);\n  zmq_connect(sock_sub, ((std::string(\"tcp://\") + addr) + \":5570\").c_str());\n\n  zmq_setsockopt(sock_sub, ZMQ_SUBSCRIBE, \"\", 0);\n\n  if (vid_input_flag) {\n    // VideoCapture video\n    cap = VideoCapture(vid_path);\n    out_json_path = string(vid_path, strrchr(vid_path, '.')) + \"_output.json\";\n    out_vid_path = string(vid_path, strrchr(vid_path, '.')) + \"_output.mp4\";\n  }\n  else if (cam_input_flag) {\n    // VideoCapture cam\n    cap = VideoCapture(cam_num);\n    cap.set(CAP_PROP_FPS, 20);\n    cap.set(CAP_PROP_BUFFERSIZE, 3);\n    fetch_flag = true;\n    out_json_path = \"./cam_output.json\";\n    out_vid_path = \"./cam_output.mp4\";\n  }\n  else {\n    // error\n    std::cerr << \"Usage: \" << argv[0] << \" <-addr ADDR> <-cam CAM_NUM | -vid VIDEO_PATH> [-dont_show] [-out_json] [-out_vid] \\n\" << std::endl;\n    return 0;\n  }\n\n  if (!cap.isOpened()) {\n    cerr << \"Error: failed to open VideoCapture...\\n\";\n    return -1;\n  }\n\n  double fps = cap.get(CAP_PROP_FPS);\n  end_frame = cap.get(CAP_PROP_FRAME_COUNT);\n  delay = cam_input_flag ? 
1 : (1000.0 / fps);\n\n  // read frame\n  cap.read(mat_fetch);\n\n  if (mat_fetch.empty()) {\n    cerr << \"Empty Mat Captured...\\n\";\n    return 0;\n  }\n\n  mat_show_output = mat_fetch.clone();\n  mat_show_input = mat_fetch.clone();\n\n  // output init\n  if (json_output_flag) {\n    out_json_file = ofstream(out_json_path);\n    if (!out_json_file.is_open()) {\n      cerr << \"output file : \" << out_json_path << \" open error \\n\";\n      return 0;\n    }\n    out_json_file << \"{\\n \\\"det\\\": [\\n\";\n  }\n\n  if (vid_output_flag) {\n    writer.open(out_vid_path, VideoWriter::fourcc('M', 'P', '4', 'V'), fps, Size(cap_width, cap_height), true);\n    if (!writer.isOpened()) {\n      cerr << \"Error: failed to open VideoWriter...\\n\";\n      return -1;\n    }\n  }\n\n  // thread init\n  thread thread_fetch(fetch_thread);\n  thread_fetch.detach();\n\n  while (!fetch_flag);\n\n  thread thread_show_output(output_show_thread);\n  thread thread_show_input(input_show_thread);\n  thread thread_recv(recv_thread);\n  thread thread_capture(capture_thread);\n\n  thread_show_input.detach();\n  thread_show_output.detach();\n  thread_recv.detach();\n  thread_capture.detach();\n\n  // spin until every worker thread has decremented final_exit_flag\n  while (final_exit_flag)\n  {\n    // for debug\n    cout << \"R : \" << recv_queue.size() << \" | C : \" << cap_queue.size() << \" | F : \" << fetch_queue.size() << \" | T : \" << end_frame << \" : \" << show_frame << endl;\n  }\n\n  cap.release();\n\n  if (json_output_flag) {\n    out_json_file << \"\\n ]\\n}\";\n    out_json_file.close();\n  }\n\n  if (vid_output_flag) {\n    writer.release();\n  }\n\n  delete frame_pool;\n  zmq_close(sock_sub);\n  zmq_close(sock_push);\n  zmq_ctx_destroy(context);\n\n  return 0;\n}\n\n#define FETCH_THRESH 100\n#define FETCH_WAIT_THRESH 30\n#define FETCH_STATE 0\n#define FETCH_WAIT 1\nvoid fetch_thread(void) {\n  volatile int fetch_state = FETCH_STATE;\n  final_exit_flag += 1;\n  while (!exit_flag) {\n\n    switch (fetch_state) {\n    case FETCH_STATE:\n      if (cap.grab()) 
{\n\n        cap.retrieve(mat_fetch);\n\n        // push fetch queue\n        fetch_queue.push_back(mat_fetch.clone());\n\n        // if cam dont wait\n        if (!cam_input_flag && (fetch_queue.size() > FETCH_THRESH)) {\n          fetch_state = FETCH_WAIT;\n        }\n      }\n      // if fetch end\n      else {\n        final_exit_flag -= 1;\n        return;\n      }\n      break;\n    case FETCH_WAIT:\n      fetch_flag = true;\n      if (fetch_queue.size() < FETCH_WAIT_THRESH) {\n        fetch_state = FETCH_STATE;\n      }\n      break;\n    }\n  }\n  final_exit_flag -= 1;\n}\n\nvoid capture_thread(void) {\n  static vector<int> param = { IMWRITE_JPEG_QUALITY, 50 };\n  static vector<uchar> encode_buf(JSON_BUF_LEN);\n\n  volatile int frame_seq_num = 1;\n  string frame_seq;\n\n  // for json\n  unsigned char json_buf[JSON_BUF_LEN];\n  int send_json_len;\n\n  Frame frame = frame_pool->alloc_frame();\n\n  final_exit_flag += 1;\n  while (!exit_flag) {\n    if (fetch_queue.size() < 1)\n      continue;\n\n    // get input mat \n    mat_cap = fetch_queue.front().clone();\n    fetch_queue.pop_front();\n\n    if (mat_cap.empty()) {\n      cerr << \"Empty Mat Captured...\\n\";\n      continue;\n    }\n\n    // resize\n    resize(mat_cap, mat_cap, Size(cap_width, cap_height));\n\n    // push to cap queue (for display input)\n    cap_queue.push_back(mat_cap.clone());\n\n    // mat to jpg\n    imencode(\".jpg\", mat_cap, encode_buf, param);\n\n    // jpg to json (seq + msg)\n    frame_seq = to_string(frame_seq_num);\n    frame.seq_len = frame_seq.size();\n    memcpy(frame.seq_buf, frame_seq.c_str(), frame.seq_len);\n\n    frame.msg_len = encode_buf.size();\n    memcpy(frame.msg_buf, &encode_buf[0], frame.msg_len);\n\n    send_json_len = frame_to_json(json_buf, frame);\n\n    // send json to server\n    zmq_send(sock_push, json_buf, send_json_len, 0);\n\n    frame_seq_num++;\n  }\n  frame_pool->free_frame(frame);\n  final_exit_flag -= 1;\n}\n\nvoid recv_thread(void) {\n  int 
recv_json_len;\n  int frame_seq_num = 1;\n  Frame frame;\n  unsigned char json_buf[JSON_BUF_LEN];\n\n  final_exit_flag += 1;\n  while (!exit_flag) {\n    // leave room for the terminating '\\0' appended below\n    recv_json_len = zmq_recv(sock_sub, json_buf, JSON_BUF_LEN - 1, ZMQ_NOBLOCK);\n\n    if (recv_json_len > 0) {\n      frame = frame_pool->alloc_frame();\n      json_buf[recv_json_len] = '\\0';\n      json_to_frame(json_buf, frame);\n\n      frame_seq_num = str_to_int((const char *)frame.seq_buf, frame.seq_len);\n\n      // push to recv_queue (for display output)\n      pair<long, Frame> p = make_pair(frame_seq_num, frame);\n      recv_queue.push(p);\n    }\n  }\n  final_exit_flag -= 1;\n}\n\n#define DONT_SHOW 0\n#define SHOW_START 1\n#define DONT_SHOW_THRESH 2  // for buffering\n#define SHOW_START_THRESH 1 // for buffering\n\nint volatile show_state = DONT_SHOW;\nvoid input_show_thread(void) {\n\n  if (!dont_show_flag) {\n    namedWindow(\"INPUT\");\n    moveWindow(\"INPUT\", 30, 130);\n    cv::imshow(\"INPUT\", mat_show_input);\n  }\n\n  final_exit_flag += 1;\n  while (!exit_flag) {\n    switch (show_state) {\n    case DONT_SHOW:\n      break;\n    case SHOW_START:\n      if (cap_queue.size() >= DONT_SHOW_THRESH) {\n        mat_show_input = cap_queue.front().clone();\n        cap_queue.pop_front();\n      }\n      break;\n    }\n\n    if (!dont_show_flag) {\n\n      // draw text (INPUT) in the upper-left corner\n      putText(mat_show_input, \"INPUT\", Point(10, 25),\n        FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255), 2);\n\n      cv::imshow(\"INPUT\", mat_show_input);\n\n      // wait key for exit\n      if (waitKey((int)delay) >= 0)\n        exit_flag = true;\n    }\n  }\n  final_exit_flag -= 1;\n}\n\nvoid output_show_thread(void) {\n  Frame frame;\n\n  if (!dont_show_flag) {\n    namedWindow(\"OUTPUT\");\n    moveWindow(\"OUTPUT\", 670, 130);\n    cv::imshow(\"OUTPUT\", mat_show_output);\n  }\n\n  final_exit_flag += 1;\n  while (!exit_flag) {\n\n    switch (show_state) {\n    case DONT_SHOW:\n      if 
(recv_queue.size() >= SHOW_START_THRESH) {\n        show_state = SHOW_START;\n      }\n      break;\n    case SHOW_START:\n      if (recv_queue.size() >= DONT_SHOW_THRESH || (end_frame - show_frame) == 1) {\n        pair<long, Frame> p;\n        // try pop success\n        while (1) {\n          if (recv_queue.try_pop(p)) {\n            // if right sequence\n            if (p.first == show_frame) {\n\n              frame = p.second;\n              vector<uchar> decode_buf((unsigned char*)(frame.msg_buf), (unsigned char*)(frame.msg_buf) + frame.msg_len);\n\n              // jpg to mat\n              mat_show_output = imdecode(decode_buf, IMREAD_COLOR);\n\n              // resize in place so the displayed and recorded frames match\n              // the fixed size the VideoWriter was opened with\n              resize(mat_show_output, mat_show_output, Size(cap_width, cap_height));\n\n              // write out_json\n              if (json_output_flag) {\n                if (show_frame != 1)\n                  out_json_file << \",\\n\";\n                out_json_file.write((const char*)frame.det_buf, frame.det_len);\n              }\n\n              // write out_vid\n              if (vid_output_flag)\n                writer.write(mat_show_output);\n\n              // free frame\n              frame_pool->free_frame(frame);\n              show_frame++;\n            }\n            // wrong sequence\n            else {\n              recv_queue.push(p);\n            }\n            break;\n          }\n        }\n      }\n      else {\n        show_state = DONT_SHOW;\n      }\n      break;\n    }\n\n    if (show_frame == end_frame)\n      exit_flag = true;\n\n    if (!dont_show_flag) {\n      // draw text (OUTPUT) in the upper-left corner\n      putText(mat_show_output, \"OUTPUT\", Point(10, 25),\n        FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0), 2);\n\n      cv::imshow(\"OUTPUT\", mat_show_output);\n\n      // wait key for exit\n      if (waitKey((int)delay) >= 0)\n        exit_flag = true;\n    }\n  }\n  final_exit_flag -= 1;\n}\n"
  },
  {
    "path": "client/darknet_client/src/mem_pool.cpp",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include \"mem_pool.hpp\"\n/*==========================================================\nCMemPool:\n    Constructor of this class. It allocate memory block from system and create\n    a static double linked list to manage all memory unit.\n\nParameters:\n    [in]ulUnitNum\n    The number of unit which is a part of memory block.\n\n    [in]ulUnitSize\n    The size of unit.\n//=========================================================\n*/\nCMemPool::CMemPool(unsigned long ulUnitNum, unsigned long ulUnitSize) :\n  m_pMemBlock(NULL), m_pAllocatedMemBlock(NULL), m_pFreeMemBlock(NULL),\n  m_ulBlockSize(ulUnitNum * (ulUnitSize + sizeof(struct _Unit))),\n  m_ulUnitSize(ulUnitSize)\n{\n  m_pMemBlock = malloc(m_ulBlockSize);     //Allocate a memory block.\n\n  if (NULL != m_pMemBlock)\n  {\n    for (unsigned long i = 0; i < ulUnitNum; i++)  //Link all mem unit . Create linked list.\n    {\n      struct _Unit *pCurUnit = (struct _Unit *)((char *)m_pMemBlock + i * (ulUnitSize + sizeof(struct _Unit)));\n\n      pCurUnit->pPrev = NULL;\n      pCurUnit->pNext = m_pFreeMemBlock;    //Insert the new unit at head.\n\n      if (NULL != m_pFreeMemBlock)\n      {\n        m_pFreeMemBlock->pPrev = pCurUnit;\n      }\n      m_pFreeMemBlock = pCurUnit;\n    }\n  }\n}\n\n/*===============================================================\n~CMemPool():\n    Destructor of this class. Its task is to free memory block.\n//===============================================================\n*/\nCMemPool::~CMemPool()\n{\n  free(m_pMemBlock);\n}\n\n/*================================================================\nAlloc:\n    To allocate a memory unit. 
If the memory pool cannot provide a proper memory unit,\n    it will fall back to the system allocator.\n\nParameters:\n    [in]ulSize\n    Memory unit size.\n\n    [in]bUseMemPool\n    Whether use memory pool.\n\nReturn Values:\n    Return a pointer to a memory unit.\n//=================================================================\n*/\nvoid* CMemPool::Alloc(unsigned long ulSize, bool bUseMemPool)\n{\n  if (ulSize > m_ulUnitSize || false == bUseMemPool ||\n    NULL == m_pMemBlock || NULL == m_pFreeMemBlock)\n  {\n    return malloc(ulSize);\n  }\n\n  //Now the free list isn't empty\n  struct _Unit *pCurUnit = m_pFreeMemBlock;\n  m_pFreeMemBlock = pCurUnit->pNext;            //Get a unit from the free linked list.\n  if (NULL != m_pFreeMemBlock)\n  {\n    m_pFreeMemBlock->pPrev = NULL;\n  }\n\n  pCurUnit->pPrev = NULL;                       //Detach fully before relinking.\n  pCurUnit->pNext = m_pAllocatedMemBlock;\n\n  if (NULL != m_pAllocatedMemBlock)\n  {\n    m_pAllocatedMemBlock->pPrev = pCurUnit;\n  }\n  m_pAllocatedMemBlock = pCurUnit;\n\n  return (void *)((char *)pCurUnit + sizeof(struct _Unit));\n}\n\n/*================================================================\nFree:\n    To free a memory unit. If the pointer parameter points to a memory unit,\n    unlink it from the \"Allocated\" linked list and insert it into the \"Free\"\n    linked list. Otherwise, call the system function \"free\".\n\nParameters:\n    [in]p\n    It points to a memory unit that is about to be freed.\n\nReturn Values:\n    none\n//================================================================\n*/\nvoid CMemPool::Free(void* p)\n{\n  if (m_pMemBlock < p && p < (void *)((char *)m_pMemBlock + m_ulBlockSize))\n  {\n    struct _Unit *pCurUnit = (struct _Unit *)((char *)p - sizeof(struct _Unit));\n\n    //Unlink pCurUnit from the allocated list (it may not be the head).\n    if (NULL != pCurUnit->pPrev)\n    {\n      pCurUnit->pPrev->pNext = pCurUnit->pNext;\n    }\n    else\n    {\n      m_pAllocatedMemBlock = pCurUnit->pNext;\n    }\n    if (NULL != pCurUnit->pNext)\n    {\n      pCurUnit->pNext->pPrev = pCurUnit->pPrev;\n    }\n\n    //Insert pCurUnit at the head of the free list.\n    pCurUnit->pPrev = NULL;\n    pCurUnit->pNext = m_pFreeMemBlock;\n    if (NULL != m_pFreeMemBlock)\n    {\n      m_pFreeMemBlock->pPrev = pCurUnit;\n    }\n\n    m_pFreeMemBlock = pCurUnit;\n  }\n  else\n  {\n    free(p);\n  }\n}"
  },
  {
    "path": "client/darknet_client/src/mem_pool.hpp",
    "content": "#ifndef __MEMPOOL_H__\n#define __MEMPOOL_H__\n// https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how-to-implement-it\nclass CMemPool\n{\nprivate:\n  //This structure exists so the linked list can be operated on conveniently\n  struct _Unit                     //The type of a linked-list node.\n  {\n    struct _Unit *pPrev, *pNext;\n  };\n\n  void* m_pMemBlock;                //The address of the memory pool.\n\n  //Manage all units with two linked lists.\n  struct _Unit*    m_pAllocatedMemBlock; //Head pointer of the Allocated linked list.\n  struct _Unit*    m_pFreeMemBlock;      //Head pointer of the Free linked list.\n\n  unsigned long    m_ulUnitSize; //Memory unit size. The pool holds many units.\n  unsigned long    m_ulBlockSize;//Memory pool size. The pool is made up of units.\n\npublic:\n  CMemPool(unsigned long ulUnitNum = 50, unsigned long ulUnitSize = 1024);\n  ~CMemPool();\n\n  void* Alloc(unsigned long ulSize, bool bUseMemPool = true); //Allocate a memory unit\n  void Free(void* p);                                   //Free a memory unit\n};\n\n#endif //__MEMPOOL_H__\n"
  },
  {
    "path": "client/darknet_client/src/share_queue.hpp",
    "content": "#ifndef __SHARE_QUEUE_HPP\n#define __SHARE_QUEUE_HPP\n\n// https://stackoverflow.com/questions/36762248/why-is-stdqueue-not-thread-safe\n#include <deque>\n#include <mutex>\n#include <condition_variable>\n\ntemplate <typename T>\nclass SharedQueue\n{\npublic:\n  SharedQueue();\n  ~SharedQueue();\n\n  T& front();\n  void pop_front();\n\n  void push_back(const T& item);\n  void push_back(T&& item);\n\n  int size();\n  bool empty();\n\nprivate:\n  std::deque<T> queue_;\n  std::mutex mutex_;\n  std::condition_variable cond_;\n};\n\ntemplate <typename T>\nSharedQueue<T>::SharedQueue() {}\n\ntemplate <typename T>\nSharedQueue<T>::~SharedQueue() {}\n\ntemplate <typename T>\nT& SharedQueue<T>::front()\n{\n  std::unique_lock<std::mutex> mlock(mutex_);\n  while (queue_.empty())\n  {\n    cond_.wait(mlock);\n  }\n  return queue_.front();\n}\n\ntemplate <typename T>\nvoid SharedQueue<T>::pop_front()\n{\n  std::unique_lock<std::mutex> mlock(mutex_);\n  while (queue_.empty())\n  {\n    cond_.wait(mlock);\n  }\n  queue_.pop_front();\n}\n\ntemplate <typename T>\nvoid SharedQueue<T>::push_back(const T& item)\n{\n  std::unique_lock<std::mutex> mlock(mutex_);\n  queue_.push_back(item);\n  mlock.unlock();     // unlock before notification to minimize mutex contention\n  cond_.notify_one(); // notify one waiting thread\n}\n\ntemplate <typename T>\nvoid SharedQueue<T>::push_back(T&& item)\n{\n  std::unique_lock<std::mutex> mlock(mutex_);\n  queue_.push_back(std::move(item));\n  mlock.unlock();     // unlock before notification to minimize mutex contention\n  cond_.notify_one(); // notify one waiting thread\n}\n\ntemplate <typename T>\nint SharedQueue<T>::size()\n{\n  std::unique_lock<std::mutex> mlock(mutex_);\n  int size = queue_.size();\n  mlock.unlock();\n  return size;\n}\n\ntemplate <typename T>\nbool SharedQueue<T>::empty()\n{\n  std::unique_lock<std::mutex> mlock(mutex_);\n  return queue_.empty();\n}\n#endif"
  },
  {
    "path": "client/darknet_client/src/util.cpp",
    "content": "#include \"util.hpp\"\n\n// utility\nint str_to_int(const char* str, int len)\n{\n  int i;\n  int ret = 0;\n  for (i = 0; i < len; ++i)\n  {\n    ret = ret * 10 + (str[i] - '0');\n  }\n  return ret;\n}"
  },
  {
    "path": "client/darknet_client/src/util.hpp",
    "content": "#ifndef __UTIL_HPP\n#define __UTIL_HPP\n\nint str_to_int(const char* str, int len);\n\n#endif"
  },
  {
    "path": "client/darknet_client.sln",
    "content": "﻿\nMicrosoft Visual Studio Solution File, Format Version 12.00\n# Visual Studio 15\nVisualStudioVersion = 15.0.28307.852\nMinimumVisualStudioVersion = 10.0.40219.1\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"darknet_client\", \"darknet_client\\darknet_client.vcxproj\", \"{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}\"\nEndProject\nGlobal\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\n\t\tDebug|x64 = Debug|x64\n\t\tDebug|x86 = Debug|x86\n\t\tRelease|x64 = Release|x64\n\t\tRelease|x86 = Release|x86\n\tEndGlobalSection\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x64.Build.0 = Debug|x64\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x86.ActiveCfg = Debug|Win32\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x86.Build.0 = Debug|Win32\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x64.ActiveCfg = Release|x64\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x64.Build.0 = Release|x64\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x86.ActiveCfg = Release|Win32\n\t\t{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x86.Build.0 = Release|Win32\n\tEndGlobalSection\n\tGlobalSection(SolutionProperties) = preSolution\n\t\tHideSolutionNode = FALSE\n\tEndGlobalSection\n\tGlobalSection(ExtensibilityGlobals) = postSolution\n\t\tSolutionGuid = {FA14131C-5C1C-41BF-AEB9-201B8E8E0B1E}\n\tEndGlobalSection\nEndGlobal\n"
  },
  {
    "path": "server/Makefile",
    "content": "DEBUG = 1\n\nCPP = g++\nCOMMON = -DOPENCV\nCXXFLAGS = -g -Wall -O2 -std=c++11 -DOPENCV\nLDFLAGS = -lstdc++ -lpthread -lzmq -lrt -ltbb -ldarknet -lboost_serialization\n\nCXXFLAGS += `pkg-config --cflags json-c`\nCXXFLAGS += `pkg-config --cflags opencv`\n\nLDFLAGS += `pkg-config --libs json-c`\nLDFLAGS += `pkg-config --libs opencv`\n\nifeq ($(DEBUG), 1)\nCOMMON += -DDEBUG\nendif\n\nVPATH = ./src/\nOBJDIR = ./obj/\nDEPS = $(wildcard src/*.h*)\n\nEXEC1 = ventilator\nEXEC1_OBJ = ventilator.o frame.o mem_pool.o base64.o\nEXEC1_OBJS = $(addprefix $(OBJDIR), $(EXEC1_OBJ))\n\nEXEC2 = worker\nEXEC2_OBJ = worker.o people.o pose_detector.o frame.o mem_pool.o base64.o args.o\nEXEC2_OBJS = $(addprefix $(OBJDIR), $(EXEC2_OBJ))\n\nEXEC3 = sink\nEXEC3_OBJ = sink.o people.o Tracker.o Hungarian.o KalmanTracker.o frame.o mem_pool.o base64.o\nEXEC3_OBJS = $(addprefix $(OBJDIR), $(EXEC3_OBJ))\n\nOBJS = $(EXEC1_OBJS) $(EXEC2_OBJS) $(EXEC3_OBJS)\nEXECS = $(EXEC1) $(EXEC2) $(EXEC3)\nINPROCS = processed unprocessed action\n\nall: $(OBJDIR) $(EXECS)\n\n$(EXEC1): $(EXEC1_OBJS)\n\t$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)\n\n$(EXEC2): $(EXEC2_OBJS)\n\t$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)\n\n$(EXEC3): $(EXEC3_OBJS)\n\t$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)\n\n$(OBJDIR)%.o: %.cpp $(DEPS)\n\t$(CPP) $(COMMON) $(CXXFLAGS) -c $< -o $@ \n\n$(OBJDIR):\n\tmkdir -p $(OBJDIR) cfg weights names train\n\nclean:\n\trm -rf $(OBJS) $(EXECS) $(INPROCS)\n"
  },
  {
    "path": "server/action.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport zmq\nimport io\nimport time\nfrom tensorflow import keras\n\nfrom tensorflow.keras.backend import set_session\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\nsess = tf.Session(config=config)\nset_session(sess)\nsess.run(tf.global_variables_initializer())\n\n'''\n## TF 2.0\nfrom tensorflow.compat.v1.keras.backend import set_session\ntf.compat.v1.disable_eager_execution()\nconfig = tf.compat.v1.ConfigProto()  \nconfig.gpu_options.allow_growth = True\nsess = tf.compat.v1.Session(config=config)\nset_session(sess)\nsess.run(tf.compat.v1.global_variables_initializer())\n'''\n\nmodel = keras.models.load_model(\"weights/action.h5\")\n\nn_input = 24  # num input parameters per timestep\nn_steps = 32\nn_hidden = 34 # Hidden layer num of features\nn_classes = 4 \nbatch_size = 1024\n\ndef load_X(msg):\n  buf = io.StringIO(msg)\n  X_ = np.array(\n      [elem for elem in [\n      row.split(',') for row in buf\n      ]], \n      dtype=np.float32\n      )\n  blocks = int(len(X_) / 32)\n  X_ = np.array(np.split(X_,blocks))\n  return X_ \n\n\n# load\ninput_ = np.zeros((batch_size, n_steps, n_input), dtype=np.float32)\nprint(\"model loaded ...\")\n\ncontext = zmq.Context()\nsocket = context.socket(zmq.REP)\nsocket.bind(\"ipc://action\")\n\nwhile True:\n  msg = socket.recv()\n  msg = msg.decode(\"utf-8\")\n  recv_ = load_X(msg)\n\n  for i in range(len(recv_)):\n    input_[i] = recv_[i]\n  startTime = time.time()\n  pred = model.predict_classes(input_, batch_size = batch_size)\n  \n  endTime = time.time() - startTime\n  print(\"time : \", endTime)\n  pred_str = \"\"\n\n  for i in range(len(recv_)):\n    pred_str += str(pred[i])\n  print(\"result : \", pred_str)\n  socket.send_string(pred_str)\n"
  },
  {
    "path": "server/action_train.py",
    "content": "import tensorflow as tf\nimport numpy as np\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\nsess = tf.Session(config=config)\n\n# Set parameter\nn_input = 24  # num input parameters per timestep\nn_steps = 32\nn_hidden = 34 # Hidden layer num of features\nn_classes = 4 \nbatch_size = 1024\nlambda_loss_amount = 0.0015\nlearning_rate = 0.0025\ndecay_rate = 0.02\ntraining_epochs = 300\n\n###\ndef load_X(X_path):\n    file = open(X_path, 'r')\n    X_ = np.array(\n        [elem for elem in [\n            row.split(',') for row in file\n        ]], \n        dtype=np.float32\n    )\n    file.close()\n    blocks = int(len(X_) / n_steps)\n    X_ = np.array(np.split(X_,blocks))\n    return X_ \n\ndef load_y(y_path):\n    file = open(y_path, 'r')\n    y_ = np.array(\n        [elem for elem in [\n            row.replace('  ', ' ').strip().split(' ') for row in file\n        ]], \n        dtype=np.int32\n    )\n    file.close()\n    # for 0-based indexing \n    return y_ - 1\n\n\nDATASET_PATH = \"train/\"\n\nX_train_path = DATASET_PATH + \"pose36.txt\"\nX_test_path = DATASET_PATH + \"pose36_test.txt\"\n\ny_train_path = DATASET_PATH + \"pose36_c.txt\"\ny_test_path = DATASET_PATH + \"pose36_test_c.txt\"\n\n\nX_train = load_X(X_train_path)\nX_test = load_X(X_test_path)\n#print X_test\n\ny_train = load_y(y_train_path)\ny_test = load_y(y_test_path)\n###\n\nmodel = tf.keras.Sequential([\n   # relu activation\n   layers.Dense(n_hidden, activation='relu', \n       kernel_initializer='random_normal', \n       bias_initializer='random_normal',\n       batch_input_shape=(batch_size, n_steps, n_input)\n   ),\n   \n   # cuDNN\n   layers.CuDNNLSTM(n_hidden, return_sequences=True,  unit_forget_bias=1.0),\n   layers.CuDNNLSTM(n_hidden,  unit_forget_bias=1.0),\n   \n   # layers.LSTM(n_hidden, return_sequences=True,  unit_forget_bias=1.0),\n   # layers.LSTM(n_hidden,  
unit_forget_bias=1.0),\n\n   layers.Dense(n_classes, kernel_initializer='random_normal', \n       bias_initializer='random_normal',\n       kernel_regularizer=tf.keras.regularizers.l2(lambda_loss_amount),\n       bias_regularizer=tf.keras.regularizers.l2(lambda_loss_amount),\n       activation='softmax'\n   )\n])\n\nmodel.compile(\n   optimizer=tf.keras.optimizers.Adam(lr=learning_rate, decay=decay_rate),\n   metrics=['accuracy'],\n   loss='categorical_crossentropy'\n)\n\ny_train_one_hot = keras.utils.to_categorical(y_train, 4)\ny_test_one_hot = keras.utils.to_categorical(y_test, 4)\n\ntrain_size = X_train.shape[0] - X_train.shape[0] % batch_size\ntest_size = X_test.shape[0] - X_test.shape[0] % batch_size\n\nhistory = model.fit(\n   X_train[:train_size,:,:], \n   y_train_one_hot[:train_size,:], \n   epochs=training_epochs,\n   batch_size=batch_size,\n   validation_data=(X_test[:test_size,:,:], y_test_one_hot[:test_size,:])\n)\n\nfrom tensorflow.keras.models import load_model\nmodel.save('weights/action.h5')\n"
  },
  {
    "path": "server/cfg/openpose.cfg",
    "content": "[net]\nwidth=200\nheight=200\nchannels=3\n\n[convolutional]\nbatch_normalize=0\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=0\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=0\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n#######\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=512\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=38\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-6\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\
nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=512\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=19\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-7,-1,-12\n\n###concat_stage2###\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=38\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-8\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=19\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-9,-1,-28\n\n###concat_stage3###\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nst
ride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=38\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-8\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=19\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-9,-1,-44\n\n###concat_stage4###\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=38\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-8\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convol
utional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=19\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-9,-1,-60\n\n###concat_stage5###\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=38\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-8\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=19\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-9,-1,-76\n\n###concat_stage6###\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nb
atch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=38\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-8\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=7\nstride=1\npad=3\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=128\nsize=1\nstride=1\npad=0\nactivation=relu\n\n[convolutional]\nbatch_normalize=0\nfilters=19\nsize=1\nstride=1\npad=0\nactivation=linear\n\n[route]\nlayers=-1,-9\n"
  },
  {
    "path": "server/src/DetectorInterface.hpp",
    "content": "#ifndef __DET_INTERFACE\n#define __DET_INTERFACE\n#include <string>\n#include <opencv2/opencv.hpp>\n\nclass DetectorInterface {\npublic:\n  virtual void detect(cv::Mat, float thresh) = 0;\n  virtual void draw(cv::Mat) = 0;\n  virtual std::string det_to_json(int frame_id) = 0;\n};\n  \n#endif\n"
  },
  {
    "path": "server/src/Hungarian.cpp",
    "content": "///////////////////////////////////////////////////////////////////////////////\n// Hungarian.cpp: Implementation file for Class HungarianAlgorithm.\n// \n// This is a C++ wrapper with slight modification of a hungarian algorithm implementation by Markus Buehren.\n// The original implementation is a few mex-functions for use in MATLAB, found here:\n// http://www.mathworks.com/matlabcentral/fileexchange/6543-functions-for-the-rectangular-assignment-problem\n// \n// Both this code and the orignal code are published under the BSD license.\n// by Cong Ma, 2016\n// \n\n#include \"Hungarian.h\"\n\n\nHungarianAlgorithm::HungarianAlgorithm(){}\nHungarianAlgorithm::~HungarianAlgorithm(){}\n\n\n//********************************************************//\n// A single function wrapper for solving assignment problem.\n//********************************************************//\ndouble HungarianAlgorithm::Solve(vector<vector<double> >& DistMatrix, vector<int>& Assignment)\n{\n\tunsigned int nRows = DistMatrix.size();\n\tunsigned int nCols = DistMatrix[0].size();\n\n\tdouble *distMatrixIn = new double[nRows * nCols];\n\tint *assignment = new int[nRows];\n\tdouble cost = 0.0;\n\n\t// Fill in the distMatrixIn. Mind the index is \"i + nRows * j\".\n\t// Here the cost matrix of size MxN is defined as a double precision array of N*M elements. \n\t// In the solving functions matrices are seen to be saved MATLAB-internally in row-order.\n\t// (i.e. 
the matrix [1 2; 3 4] will be stored as a vector [1 3 2 4], NOT [1 2 3 4]).\n\tfor (unsigned int i = 0; i < nRows; i++)\n\t\tfor (unsigned int j = 0; j < nCols; j++)\n\t\t\tdistMatrixIn[i + nRows * j] = DistMatrix[i][j];\n\t\n\t// call solving function\n\tassignmentoptimal(assignment, &cost, distMatrixIn, nRows, nCols);\n\n\tAssignment.clear();\n\tfor (unsigned int r = 0; r < nRows; r++)\n\t\tAssignment.push_back(assignment[r]);\n\n\tdelete[] distMatrixIn;\n\tdelete[] assignment;\n\treturn cost;\n}\n\n\n//********************************************************//\n// Solve optimal solution for assignment problem using Munkres algorithm, also known as Hungarian Algorithm.\n//********************************************************//\nvoid HungarianAlgorithm::assignmentoptimal(int *assignment, double *cost, double *distMatrixIn, int nOfRows, int nOfColumns)\n{\n\tdouble *distMatrix, *distMatrixTemp, *distMatrixEnd, *columnEnd, value, minValue;\n\tbool *coveredColumns, *coveredRows, *starMatrix, *newStarMatrix, *primeMatrix;\n\tint nOfElements, minDim, row, col;\n\n\t/* initialization */\n\t*cost = 0;\n\tfor (row = 0; row<nOfRows; row++)\n\t\tassignment[row] = -1;\n\n\t/* generate working copy of distance Matrix */\n\t/* check if all matrix elements are positive */\n\tnOfElements = nOfRows * nOfColumns;\n\tdistMatrix = (double *)malloc(nOfElements * sizeof(double));\n\tdistMatrixEnd = distMatrix + nOfElements;\n\n\tfor (row = 0; row<nOfElements; row++)\n\t{\n\t\tvalue = distMatrixIn[row];\n\t\tif (value < 0)\n\t\t\tcerr << \"All matrix elements have to be non-negative.\" << endl;\n\t\tdistMatrix[row] = value;\n\t}\n\n\n\t/* memory allocation */\n\tcoveredColumns = (bool *)calloc(nOfColumns, sizeof(bool));\n\tcoveredRows = (bool *)calloc(nOfRows, sizeof(bool));\n\tstarMatrix = (bool *)calloc(nOfElements, sizeof(bool));\n\tprimeMatrix = (bool *)calloc(nOfElements, sizeof(bool));\n\tnewStarMatrix = (bool *)calloc(nOfElements, sizeof(bool)); /* used in step4 */\n\n\t/* 
preliminary steps */\n\tif (nOfRows <= nOfColumns)\n\t{\n\t\tminDim = nOfRows;\n\n\t\tfor (row = 0; row<nOfRows; row++)\n\t\t{\n\t\t\t/* find the smallest element in the row */\n\t\t\tdistMatrixTemp = distMatrix + row;\n\t\t\tminValue = *distMatrixTemp;\n\t\t\tdistMatrixTemp += nOfRows;\n\t\t\twhile (distMatrixTemp < distMatrixEnd)\n\t\t\t{\n\t\t\t\tvalue = *distMatrixTemp;\n\t\t\t\tif (value < minValue)\n\t\t\t\t\tminValue = value;\n\t\t\t\tdistMatrixTemp += nOfRows;\n\t\t\t}\n\n\t\t\t/* subtract the smallest element from each element of the row */\n\t\t\tdistMatrixTemp = distMatrix + row;\n\t\t\twhile (distMatrixTemp < distMatrixEnd)\n\t\t\t{\n\t\t\t\t*distMatrixTemp -= minValue;\n\t\t\t\tdistMatrixTemp += nOfRows;\n\t\t\t}\n\t\t}\n\n\t\t/* Steps 1 and 2a */\n\t\tfor (row = 0; row<nOfRows; row++)\n\t\t\tfor (col = 0; col<nOfColumns; col++)\n\t\t\t\tif (fabs(distMatrix[row + nOfRows*col]) < DBL_EPSILON)\n\t\t\t\t\tif (!coveredColumns[col])\n\t\t\t\t\t{\n\t\t\t\t\t\tstarMatrix[row + nOfRows*col] = true;\n\t\t\t\t\t\tcoveredColumns[col] = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t}\n\telse /* if(nOfRows > nOfColumns) */\n\t{\n\t\tminDim = nOfColumns;\n\n\t\tfor (col = 0; col<nOfColumns; col++)\n\t\t{\n\t\t\t/* find the smallest element in the column */\n\t\t\tdistMatrixTemp = distMatrix + nOfRows*col;\n\t\t\tcolumnEnd = distMatrixTemp + nOfRows;\n\n\t\t\tminValue = *distMatrixTemp++;\n\t\t\twhile (distMatrixTemp < columnEnd)\n\t\t\t{\n\t\t\t\tvalue = *distMatrixTemp++;\n\t\t\t\tif (value < minValue)\n\t\t\t\t\tminValue = value;\n\t\t\t}\n\n\t\t\t/* subtract the smallest element from each element of the column */\n\t\t\tdistMatrixTemp = distMatrix + nOfRows*col;\n\t\t\twhile (distMatrixTemp < columnEnd)\n\t\t\t\t*distMatrixTemp++ -= minValue;\n\t\t}\n\n\t\t/* Steps 1 and 2a */\n\t\tfor (col = 0; col<nOfColumns; col++)\n\t\t\tfor (row = 0; row<nOfRows; row++)\n\t\t\t\tif (fabs(distMatrix[row + nOfRows*col]) < DBL_EPSILON)\n\t\t\t\t\tif 
(!coveredRows[row])\n\t\t\t\t\t{\n\t\t\t\t\t\tstarMatrix[row + nOfRows*col] = true;\n\t\t\t\t\t\tcoveredColumns[col] = true;\n\t\t\t\t\t\tcoveredRows[row] = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\tfor (row = 0; row<nOfRows; row++)\n\t\t\tcoveredRows[row] = false;\n\n\t}\n\n\t/* move to step 2b */\n\tstep2b(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);\n\n\t/* compute cost and remove invalid assignments */\n\tcomputeassignmentcost(assignment, cost, distMatrixIn, nOfRows);\n\n\t/* free allocated memory */\n\tfree(distMatrix);\n\tfree(coveredColumns);\n\tfree(coveredRows);\n\tfree(starMatrix);\n\tfree(primeMatrix);\n\tfree(newStarMatrix);\n\n\treturn;\n}\n\n/********************************************************/\nvoid HungarianAlgorithm::buildassignmentvector(int *assignment, bool *starMatrix, int nOfRows, int nOfColumns)\n{\n\tint row, col;\n\n\tfor (row = 0; row<nOfRows; row++)\n\t\tfor (col = 0; col<nOfColumns; col++)\n\t\t\tif (starMatrix[row + nOfRows*col])\n\t\t\t{\n#ifdef ONE_INDEXING\n\t\t\t\tassignment[row] = col + 1; /* MATLAB-Indexing */\n#else\n\t\t\t\tassignment[row] = col;\n#endif\n\t\t\t\tbreak;\n\t\t\t}\n}\n\n/********************************************************/\nvoid HungarianAlgorithm::computeassignmentcost(int *assignment, double *cost, double *distMatrix, int nOfRows)\n{\n\tint row, col;\n\n\tfor (row = 0; row<nOfRows; row++)\n\t{\n\t\tcol = assignment[row];\n\t\tif (col >= 0)\n\t\t\t*cost += distMatrix[row + nOfRows*col];\n\t}\n}\n\n/********************************************************/\nvoid HungarianAlgorithm::step2a(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)\n{\n\tbool *starMatrixTemp, *columnEnd;\n\tint col;\n\n\t/* cover every column containing a starred zero */\n\tfor (col = 0; col<nOfColumns; 
col++)\n\t{\n\t\tstarMatrixTemp = starMatrix + nOfRows*col;\n\t\tcolumnEnd = starMatrixTemp + nOfRows;\n\t\twhile (starMatrixTemp < columnEnd){\n\t\t\tif (*starMatrixTemp++)\n\t\t\t{\n\t\t\t\tcoveredColumns[col] = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* move to step 3 */\n\tstep2b(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);\n}\n\n/********************************************************/\nvoid HungarianAlgorithm::step2b(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)\n{\n\tint col, nOfCoveredColumns;\n\n\t/* count covered columns */\n\tnOfCoveredColumns = 0;\n\tfor (col = 0; col<nOfColumns; col++)\n\t\tif (coveredColumns[col])\n\t\t\tnOfCoveredColumns++;\n\n\tif (nOfCoveredColumns == minDim)\n\t{\n\t\t/* algorithm finished */\n\t\tbuildassignmentvector(assignment, starMatrix, nOfRows, nOfColumns);\n\t}\n\telse\n\t{\n\t\t/* move to step 3 */\n\t\tstep3(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);\n\t}\n\n}\n\n/********************************************************/\nvoid HungarianAlgorithm::step3(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)\n{\n\tbool zerosFound;\n\tint row, col, starCol;\n\n\tzerosFound = true;\n\twhile (zerosFound)\n\t{\n\t\tzerosFound = false;\n\t\tfor (col = 0; col<nOfColumns; col++)\n\t\t\tif (!coveredColumns[col])\n\t\t\t\tfor (row = 0; row<nOfRows; row++)\n\t\t\t\t\tif ((!coveredRows[row]) && (fabs(distMatrix[row + nOfRows*col]) < DBL_EPSILON))\n\t\t\t\t\t{\n\t\t\t\t\t\t/* prime zero */\n\t\t\t\t\t\tprimeMatrix[row + nOfRows*col] = true;\n\n\t\t\t\t\t\t/* find starred zero in current row */\n\t\t\t\t\t\tfor (starCol = 
0; starCol<nOfColumns; starCol++)\n\t\t\t\t\t\t\tif (starMatrix[row + nOfRows*starCol])\n\t\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tif (starCol == nOfColumns) /* no starred zero found */\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t/* move to step 4 */\n\t\t\t\t\t\t\tstep4(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim, row, col);\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcoveredRows[row] = true;\n\t\t\t\t\t\t\tcoveredColumns[starCol] = false;\n\t\t\t\t\t\t\tzerosFound = true;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t}\n\n\t/* move to step 5 */\n\tstep5(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);\n}\n\n/********************************************************/\nvoid HungarianAlgorithm::step4(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim, int row, int col)\n{\n\tint n, starRow, starCol, primeRow, primeCol;\n\tint nOfElements = nOfRows*nOfColumns;\n\n\t/* generate temporary copy of starMatrix */\n\tfor (n = 0; n<nOfElements; n++)\n\t\tnewStarMatrix[n] = starMatrix[n];\n\n\t/* star current zero */\n\tnewStarMatrix[row + nOfRows*col] = true;\n\n\t/* find starred zero in current column */\n\tstarCol = col;\n\tfor (starRow = 0; starRow<nOfRows; starRow++)\n\t\tif (starMatrix[starRow + nOfRows*starCol])\n\t\t\tbreak;\n\n\twhile (starRow<nOfRows)\n\t{\n\t\t/* unstar the starred zero */\n\t\tnewStarMatrix[starRow + nOfRows*starCol] = false;\n\n\t\t/* find primed zero in current row */\n\t\tprimeRow = starRow;\n\t\tfor (primeCol = 0; primeCol<nOfColumns; primeCol++)\n\t\t\tif (primeMatrix[primeRow + nOfRows*primeCol])\n\t\t\t\tbreak;\n\n\t\t/* star the primed zero */\n\t\tnewStarMatrix[primeRow + nOfRows*primeCol] = true;\n\n\t\t/* find starred zero in current 
column */\n\t\tstarCol = primeCol;\n\t\tfor (starRow = 0; starRow<nOfRows; starRow++)\n\t\t\tif (starMatrix[starRow + nOfRows*starCol])\n\t\t\t\tbreak;\n\t}\n\n\t/* use temporary copy as new starMatrix */\n\t/* delete all primes, uncover all rows */\n\tfor (n = 0; n<nOfElements; n++)\n\t{\n\t\tprimeMatrix[n] = false;\n\t\tstarMatrix[n] = newStarMatrix[n];\n\t}\n\tfor (n = 0; n<nOfRows; n++)\n\t\tcoveredRows[n] = false;\n\n\t/* move to step 2a */\n\tstep2a(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);\n}\n\n/********************************************************/\nvoid HungarianAlgorithm::step5(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)\n{\n\tdouble h, value;\n\tint row, col;\n\n\t/* find smallest uncovered element h */\n\th = DBL_MAX;\n\tfor (row = 0; row<nOfRows; row++)\n\t\tif (!coveredRows[row])\n\t\t\tfor (col = 0; col<nOfColumns; col++)\n\t\t\t\tif (!coveredColumns[col])\n\t\t\t\t{\n\t\t\t\t\tvalue = distMatrix[row + nOfRows*col];\n\t\t\t\t\tif (value < h)\n\t\t\t\t\t\th = value;\n\t\t\t\t}\n\n\t/* add h to each covered row */\n\tfor (row = 0; row<nOfRows; row++)\n\t\tif (coveredRows[row])\n\t\t\tfor (col = 0; col<nOfColumns; col++)\n\t\t\t\tdistMatrix[row + nOfRows*col] += h;\n\n\t/* subtract h from each uncovered column */\n\tfor (col = 0; col<nOfColumns; col++)\n\t\tif (!coveredColumns[col])\n\t\t\tfor (row = 0; row<nOfRows; row++)\n\t\t\t\tdistMatrix[row + nOfRows*col] -= h;\n\n\t/* move to step 3 */\n\tstep3(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);\n}\n"
  },
  {
    "path": "server/src/Hungarian.h",
    "content": "///////////////////////////////////////////////////////////////////////////////\n// Hungarian.h: Header file for Class HungarianAlgorithm.\n// \n// This is a C++ wrapper with slight modification of a hungarian algorithm implementation by Markus Buehren.\n// The original implementation is a few mex-functions for use in MATLAB, found here:\n// http://www.mathworks.com/matlabcentral/fileexchange/6543-functions-for-the-rectangular-assignment-problem\n// \n// Both this code and the orignal code are published under the BSD license.\n// by Cong Ma, 2016\n// \n\n#include <iostream>\n#include <vector>\n#include <cstdlib>\n#include <cfloat>\n#include <cmath>\n\nusing namespace std;\n\n\nclass HungarianAlgorithm\n{\npublic:\n\tHungarianAlgorithm();\n\t~HungarianAlgorithm();\n\tdouble Solve(vector<vector<double> >& DistMatrix, vector<int>& Assignment);\n\nprivate:\n\tvoid assignmentoptimal(int *assignment, double *cost, double *distMatrix, int nOfRows, int nOfColumns);\n\tvoid buildassignmentvector(int *assignment, bool *starMatrix, int nOfRows, int nOfColumns);\n\tvoid computeassignmentcost(int *assignment, double *cost, double *distMatrix, int nOfRows);\n\tvoid step2a(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);\n\tvoid step2b(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);\n\tvoid step3(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);\n\tvoid step4(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim, int row, int col);\n\tvoid step5(int *assignment, 
double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);\n};\n"
  },
  {
    "path": "server/src/KalmanTracker.cpp",
    "content": "///////////////////////////////////////////////////////////////////////////////\n// KalmanTracker.cpp: KalmanTracker Class Implementation Declaration\n\n#include \"KalmanTracker.h\"\n\n\nint KalmanTracker::kf_count = 0;\n\n\n// initialize Kalman filter\nvoid KalmanTracker::init_kf(StateType stateMat)\n{\n\tint stateNum = 7;\n\tint measureNum = 4;\n\tkf = KalmanFilter(stateNum, measureNum, 0);\n\n\tmeasurement = Mat::zeros(measureNum, 1, CV_32F);\n\n\tkf.transitionMatrix = (Mat_<float>(stateNum, stateNum) <<\n\t\t1, 0, 0, 0, 1, 0, 0,\n\t\t0, 1, 0, 0, 0, 1, 0,\n\t\t0, 0, 1, 0, 0, 0, 1,\n\t\t0, 0, 0, 1, 0, 0, 0,\n\t\t0, 0, 0, 0, 1, 0, 0,\n\t\t0, 0, 0, 0, 0, 1, 0,\n\t\t0, 0, 0, 0, 0, 0, 1);\n\n\tsetIdentity(kf.measurementMatrix);\n\tsetIdentity(kf.processNoiseCov, Scalar::all(1e-2));\n\tsetIdentity(kf.measurementNoiseCov, Scalar::all(1e-1));\n\tsetIdentity(kf.errorCovPost, Scalar::all(1));\n\t\n\t// initialize state vector with bounding box in [cx,cy,s,r] style\n\tkf.statePost.at<float>(0, 0) = stateMat.x + stateMat.width / 2;\n\tkf.statePost.at<float>(1, 0) = stateMat.y + stateMat.height / 2;\n\tkf.statePost.at<float>(2, 0) = stateMat.area();\n\tkf.statePost.at<float>(3, 0) = stateMat.width / stateMat.height;\n}\n\n\n// Predict the estimated bounding box.\nStateType KalmanTracker::predict()\n{\n\t// predict\n\tMat p = kf.predict();\n\tm_age += 1;\n\n\tif (m_time_since_update > 0)\n\t\tm_hit_streak = 0;\n\tm_time_since_update += 1;\n\n\tStateType predictBox = get_rect_xysr(p.at<float>(0, 0), p.at<float>(1, 0), p.at<float>(2, 0), p.at<float>(3, 0));\n\n\tm_history.push_back(predictBox);\n\treturn m_history.back();\n}\n\n\n// Update the state vector with observed bounding box.\nvoid KalmanTracker::update(StateType stateMat)\n{\n\tm_time_since_update = 0;\n\tm_history.clear();\n\tm_hits += 1;\n\tm_hit_streak += 1;\n\n\t// measurement\n\tmeasurement.at<float>(0, 0) = stateMat.x + stateMat.width / 2;\n\tmeasurement.at<float>(1, 0) = stateMat.y + 
stateMat.height / 2;\n\tmeasurement.at<float>(2, 0) = stateMat.area();\n\tmeasurement.at<float>(3, 0) = stateMat.width / stateMat.height;\n\n\t// update\n\tkf.correct(measurement);\n}\n\n\n// Return the current state vector\nStateType KalmanTracker::get_state()\n{\n\tMat s = kf.statePost;\n\treturn get_rect_xysr(s.at<float>(0, 0), s.at<float>(1, 0), s.at<float>(2, 0), s.at<float>(3, 0));\n}\n\n\n// Convert bounding box from [cx,cy,s,r] to [x,y,w,h] style.\nStateType KalmanTracker::get_rect_xysr(float cx, float cy, float s, float r)\n{\n\tfloat w = sqrt(s * r);\n\tfloat h = s / w;\n\tfloat x = (cx - w / 2);\n\tfloat y = (cy - h / 2);\n\n\tif (x < 0 && cx > 0)\n\t\tx = 0;\n\tif (y < 0 && cy > 0)\n\t\ty = 0;\n\n\treturn StateType(x, y, w, h);\n}\n\n\n\n/*\n// --------------------------------------------------------------------\n// Kalman Filter Demonstrating, a 2-d ball demo\n// --------------------------------------------------------------------\n\nconst int winHeight = 600;\nconst int winWidth = 800;\n\nPoint mousePosition = Point(winWidth >> 1, winHeight >> 1);\n\n// mouse event callback\nvoid mouseEvent(int event, int x, int y, int flags, void *param)\n{\n\tif (event == CV_EVENT_MOUSEMOVE) {\n\t\tmousePosition = Point(x, y);\n\t}\n}\n\nvoid TestKF();\n\nvoid main()\n{\n\tTestKF();\n}\n\n\nvoid TestKF()\n{\n\tint stateNum = 4;\n\tint measureNum = 2;\n\tKalmanFilter kf = KalmanFilter(stateNum, measureNum, 0);\n\n\t// initialization\n\tMat processNoise(stateNum, 1, CV_32F);\n\tMat measurement = Mat::zeros(measureNum, 1, CV_32F);\n\n\tkf.transitionMatrix = *(Mat_<float>(stateNum, stateNum) <<\n\t\t1, 0, 1, 0,\n\t\t0, 1, 0, 1,\n\t\t0, 0, 1, 0,\n\t\t0, 0, 0, 1);\n\n\tsetIdentity(kf.measurementMatrix);\n\tsetIdentity(kf.processNoiseCov, Scalar::all(1e-2));\n\tsetIdentity(kf.measurementNoiseCov, Scalar::all(1e-1));\n\tsetIdentity(kf.errorCovPost, Scalar::all(1));\n\n\trandn(kf.statePost, Scalar::all(0), 
Scalar::all(winHeight));\n\n\tnamedWindow(\"Kalman\");\n\tsetMouseCallback(\"Kalman\", mouseEvent);\n\tMat img(winHeight, winWidth, CV_8UC3);\n\n\twhile (1)\n\t{\n\t\t// predict\n\t\tMat prediction = kf.predict();\n\t\tPoint predictPt = Point(prediction.at<float>(0, 0), prediction.at<float>(1, 0));\n\n\t\t// generate measurement\n\t\tPoint statePt = mousePosition;\n\t\tmeasurement.at<float>(0, 0) = statePt.x;\n\t\tmeasurement.at<float>(1, 0) = statePt.y;\n\n\t\t// update\n\t\tkf.correct(measurement);\n\n\t\t// visualization\n\t\timg.setTo(Scalar(255, 255, 255));\n\t\tcircle(img, predictPt, 8, CV_RGB(0, 255, 0), -1); // predicted point as green\n\t\tcircle(img, statePt, 8, CV_RGB(255, 0, 0), -1); // current position as red\n\n\t\timshow(\"Kalman\", img);\n\t\tchar code = (char)waitKey(100);\n\t\tif (code == 27 || code == 'q' || code == 'Q')\n\t\t\tbreak;\n\t}\n\tdestroyWindow(\"Kalman\");\n}\n*/\n"
  },
  {
    "path": "server/src/KalmanTracker.h",
    "content": "///////////////////////////////////////////////////////////////////////////////\n// KalmanTracker.h: KalmanTracker Class Declaration\n\n#ifndef KALMAN_H\n#define KALMAN_H 2\n\n#include \"opencv2/video/tracking.hpp\"\n#include \"opencv2/highgui/highgui.hpp\"\n#include \"people.hpp\"\n\nusing namespace std;\nusing namespace cv;\n\n#define StateType Rect_<float>\n\n\n// This class represents the internel state of individual tracked objects observed as bounding box.\nclass KalmanTracker\n{\npublic:\n\tKalmanTracker()\n\t{\n\t\tinit_kf(StateType());\n\t\tm_time_since_update = 0;\n\t\tm_hits = 0;\n\t\tm_hit_streak = 0;\n\t\tm_age = 0;\n\t\tm_id = kf_count;\n\t\t//kf_count++;\n\t}\n\tKalmanTracker(StateType initRect, Person *p = NULL)\n\t{\n\t\tinit_kf(initRect);\n\t\tm_time_since_update = 0;\n\t\tm_hits = 0;\n\t\tm_hit_streak = 0;\n\t\tm_age = 0;\n\t\tm_id = kf_count;\n    m_p = p;\n\t\tkf_count++;\n\t}\n\n\t~KalmanTracker()\n\t{\n\t\tm_history.clear();\n\t}\n\n\tStateType predict();\n\tvoid update(StateType stateMat);\n\t\n\tStateType get_state();\n\tStateType get_rect_xysr(float cx, float cy, float s, float r);\n\n\tstatic int kf_count;\n\n\tint m_time_since_update;\n\tint m_hits;\n\tint m_hit_streak;\n\tint m_age;\n\tint m_id;\n  Person *m_p;\n\nprivate:\n\tvoid init_kf(StateType stateMat);\n\n\tcv::KalmanFilter kf;\n\tcv::Mat measurement;\n\n\tstd::vector<StateType> m_history;\n};\n\n\n\n\n#endif\n"
  },
  {
    "path": "server/src/Tracker.cpp",
    "content": "#include <vector>\n#include <set>\n#include <iterator>\n#include <iostream>\n#include \"Hungarian.h\"\n#include \"Tracker.hpp\"\n\nusing namespace std;\n\n\nTracker::Tracker() {\n  frame_count = 0;\n  max_age = 1;\n  min_hits = 3;\n  iouThreshold = 0.3;\n  trkNum = detNum = 0;\n  KalmanTracker::kf_count = 0; // tracking id relies on this, so we have to reset it in each seq.\n}\n\n// Computes IOU between two bounding boxes\ndouble Tracker::GetIOU(Rect_<float> bb_test, Rect_<float> bb_gt)\n{\n  float in = (bb_test & bb_gt).area();\n  float un = bb_test.area() + bb_gt.area() - in;\n\n  if (un < DBL_EPSILON)\n    return 0;\n\n  return (double)(in / un);\n}\n\nvector<TrackingBox> Tracker::init(vector<TrackingBox> &detections) {\n  frame_count = 1;\n  // initialize kalman trackers using first detections.\n  for (unsigned int i = 0; i < detections.size(); i++)\n  {\n    KalmanTracker trk = KalmanTracker(detections[i].box, detections[i].p);\n    trackers.push_back(trk);\n  }\n\n  // output the first frame detections\n  for (unsigned int id = 0; id < detections.size(); id++)\n  {\n    TrackingBox tb = detections[id];\n    tb.id = id + 1;\n    tb.p->set_id(tb.id);\n    frameTrackingResult.push_back(tb);\n    // \n  }\n  return frameTrackingResult;\n}\n\nvector<TrackingBox> Tracker::update(vector<TrackingBox> &detections) {\n  frame_count++;\n  // 3.1. get predicted locations from existing trackers.\n  predictedBoxes.clear();\n\n  for (auto it = trackers.begin(); it != trackers.end();)\n  {\n    Rect_<float> pBox = (*it).predict();\n    if (pBox.x >= 0 && pBox.y >= 0)\n    {\n      predictedBoxes.push_back(pBox);\n      it++;\n    }\n    else\n    {\n      it = trackers.erase(it);\n      //cerr << \"Box invalid at frame: \" << frame_count << endl;\n    }\n  }\n\n  ///////////////////////////////////////\n  // 3.2. 
associate detections to tracked object (both represented as bounding boxes)\n  // dets : detFrameData[fi]\n  trkNum = predictedBoxes.size();\n  detNum = detections.size();\n\n  iouMatrix.clear();\n  iouMatrix.resize(trkNum, vector<double>(detNum, 0));\n\n  for (unsigned int i = 0; i < trkNum; i++) // compute iou matrix as a distance matrix\n  {\n    for (unsigned int j = 0; j < detNum; j++)\n    {\n      // use 1-iou because the hungarian algorithm computes a minimum-cost assignment.\n      iouMatrix[i][j] = 1 - GetIOU(predictedBoxes[i], detections[j].box);\n    }\n  }\n\n  // solve the assignment problem using hungarian algorithm.\n  // the resulting assignment is [track(prediction) : detection], with len=preNum\n  HungarianAlgorithm HungAlgo;\n  assignment.clear();\n  HungAlgo.Solve(iouMatrix, assignment);\n\n  // find matches, unmatched_detections and unmatched_predictions\n  unmatchedTrajectories.clear();\n  unmatchedDetections.clear();\n  allItems.clear();\n  matchedItems.clear();\n\n  if (detNum > trkNum) //\tthere are unmatched detections\n  {\n    for (unsigned int n = 0; n < detNum; n++)\n      allItems.insert(n);\n\n    for (unsigned int i = 0; i < trkNum; ++i)\n      matchedItems.insert(assignment[i]);\n\n    set_difference(allItems.begin(), allItems.end(),\n      matchedItems.begin(), matchedItems.end(),\n      insert_iterator<set<int>>(unmatchedDetections, unmatchedDetections.begin()));\n  }\n  else if (detNum < trkNum) // there are unmatched trajectory/predictions\n  {\n    for (unsigned int i = 0; i < trkNum; ++i)\n      if (assignment[i] == -1) // unassigned label will be set as -1 in the assignment algorithm\n        unmatchedTrajectories.insert(i);\n  }\n  else\n    ;\n\n  // filter out matched with low IOU\n  matchedPairs.clear();\n  for (unsigned int i = 0; i < trkNum; ++i)\n  {\n    if (assignment[i] == -1) // pass over invalid values\n      continue;\n    if (1 - iouMatrix[i][assignment[i]] < iouThreshold)\n    {\n      
unmatchedTrajectories.insert(i);\n      unmatchedDetections.insert(assignment[i]);\n    }\n    else\n      matchedPairs.push_back(cv::Point(i, assignment[i]));\n  }\n\n  ///////////////////////////////////////\n  // 3.3. updating trackers\n\n  // update matched trackers with assigned detections.\n  // each prediction corresponds to a tracker\n  int detIdx, trkIdx;\n  for (unsigned int i = 0; i < matchedPairs.size(); i++)\n  {\n    trkIdx = matchedPairs[i].x;\n    detIdx = matchedPairs[i].y;\n    trackers[trkIdx].update(detections[detIdx].box);\n    trackers[trkIdx].m_p->update(detections[detIdx].p);\n  }\n\n  // create and initialise new trackers for unmatched detections\n  for (auto umd : unmatchedDetections)\n  {\n    KalmanTracker tracker = KalmanTracker(detections[umd].box, detections[umd].p);\n    trackers.push_back(tracker);\n  }\n\n  // get trackers' output\n  frameTrackingResult.clear();\n  for (auto it = trackers.begin(); it != trackers.end();)\n  {\n    if (((*it).m_time_since_update < 1) &&\n      ((*it).m_hit_streak >= min_hits || frame_count <= min_hits))\n    {\n      TrackingBox res;\n      res.box = (*it).get_state();\n      res.id = (*it).m_id + 1;\n      res.frame = frame_count;\n\n      // person info update\n      res.p = (*it).m_p;\n      res.p->set_id(res.id);\n      res.p->set_rect(res.box);\n\n      frameTrackingResult.push_back(res);\n    }\n\n    // remove dead tracklet: check the current tracker before advancing,\n    // otherwise the first element is never examined\n    if ((*it).m_time_since_update > max_age) {\n      // delete person object\n      delete (*it).m_p;\n      it = trackers.erase(it);\n    }\n    else\n      it++;\n  }\n\n  return frameTrackingResult;\n}\t// update method end\n\nTracker::~Tracker() {\n\n}\n"
  },
  {
    "path": "server/src/Tracker.hpp",
    "content": "#pragma once\n#include \"KalmanTracker.h\"\n#include <set>\n#include \"people.hpp\"\n\ntypedef struct TrackingBox\n{\n  int frame;\n  int id;\n  Rect_<float> box;\n  Person* p;\n}TrackingBox;\n\nclass Tracker\n{\n\nprivate:\n  int frame_count;\n  int max_age;\n  int min_hits;\n  double iouThreshold;\n\n  unsigned int trkNum;\n  unsigned int detNum;\n\n  vector<KalmanTracker> trackers;\n\n  // variables used in the for-loop\n  vector<Rect_<float> > predictedBoxes;\n  vector<vector<double> > iouMatrix;\n  vector<int> assignment;\n  set<int> unmatchedDetections;\n  set<int> unmatchedTrajectories;\n  set<int> allItems;\n  set<int> matchedItems;\n  vector<cv::Point> matchedPairs;\n  vector<TrackingBox> frameTrackingResult;\n\npublic:\n  Tracker();\n  // Computes IOU between two bounding boxes\n  double GetIOU(Rect_<float> bb_test, Rect_<float> bb_gt);\n  vector<TrackingBox> init(vector<TrackingBox> &detections);\n  vector<TrackingBox> update(vector<TrackingBox> &detections);\n  ~Tracker();\n};\n"
  },
  {
    "path": "server/src/args.cpp",
    "content": "// https://github.com/pjreddie/template\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"args.hpp\"\n\nvoid del_arg(int argc, char **argv, int index)\n{\n  int i;\n  for (i = index; i < argc - 1; ++i) argv[i] = argv[i + 1];\n  argv[i] = 0;\n}\n\nint find_arg(int argc, char* argv[], const char *arg)\n{\n  int i;\n  for (i = 0; i < argc; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      del_arg(argc, argv, i);\n      return 1;\n    }\n  }\n  return 0;\n}\n\nint find_int_arg(int argc, char **argv, const char *arg, int def)\n{\n  int i;\n  for (i = 0; i < argc - 1; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      def = atoi(argv[i + 1]);\n      del_arg(argc, argv, i);\n      del_arg(argc, argv, i);\n      break;\n    }\n  }\n  return def;\n}\n\nfloat find_float_arg(int argc, char **argv, const char *arg, float def)\n{\n  int i;\n  for (i = 0; i < argc - 1; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      def = atof(argv[i + 1]);\n      del_arg(argc, argv, i);\n      del_arg(argc, argv, i);\n      break;\n    }\n  }\n  return def;\n}\n\nconst char *find_char_arg(int argc, char **argv, const char *arg, const char *def)\n{\n  int i;\n  for (i = 0; i < argc - 1; ++i) {\n    if (!argv[i]) continue;\n    if (0 == strcmp(argv[i], arg)) {\n      def = argv[i + 1];\n      del_arg(argc, argv, i);\n      del_arg(argc, argv, i);\n      break;\n    }\n  }\n  return def;\n}\n"
  },
  {
    "path": "server/src/args.hpp",
    "content": "// https://github.com/pjreddie/template\n\n#ifndef ARGS_H\n#define ARGS_H\n\nint find_arg(int argc, char* argv[], const char *arg);\nint find_int_arg(int argc, char **argv, const char *arg, int def);\nfloat find_float_arg(int argc, char **argv, const char *arg, float def);\nconst char *find_char_arg(int argc, char **argv, const char *arg, const char *def);\n\n#endif"
  },
  {
    "path": "server/src/base64.cpp",
    "content": "/* \n   base64.cpp and base64.h\n\n   base64 encoding and decoding with C++.\n\n   Version: 1.01.00\n\n   Copyright (C) 2004-2017 René Nyffenegger\n\n   This source code is provided 'as-is', without any express or implied\n   warranty. In no event will the author be held liable for any damages\n   arising from the use of this software.\n\n   Permission is granted to anyone to use this software for any purpose,\n   including commercial applications, and to alter it and redistribute it\n   freely, subject to the following restrictions:\n\n   1. The origin of this source code must not be misrepresented; you must not\n      claim that you wrote the original source code. If you use this source code\n      in a product, an acknowledgment in the product documentation would be\n      appreciated but is not required.\n\n   2. Altered source versions must be plainly marked as such, and must not be\n      misrepresented as being the original source code.\n\n   3. This notice may not be removed or altered from any source distribution.\n\n   René Nyffenegger rene.nyffenegger@adp-gmbh.ch\n\n*/\n\n#include \"base64.h\"\n#include <iostream>\n\nstatic const std::string base64_chars = \n             \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n             \"abcdefghijklmnopqrstuvwxyz\"\n             \"0123456789+/\";\n\n\nstatic inline bool is_base64(unsigned char c) {\n  return (isalnum(c) || (c == '+') || (c == '/'));\n}\n\nstd::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len) {\n  std::string ret;\n  int i = 0;\n  int j = 0;\n  unsigned char char_array_3[3];\n  unsigned char char_array_4[4];\n\n  while (in_len--) {\n    char_array_3[i++] = *(bytes_to_encode++);\n    if (i == 3) {\n      char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;\n      char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);\n      char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);\n      char_array_4[3] = 
char_array_3[2] & 0x3f;\n\n      for(i = 0; (i <4) ; i++)\n        ret += base64_chars[char_array_4[i]];\n      i = 0;\n    }\n  }\n\n  if (i)\n  {\n    for(j = i; j < 3; j++)\n      char_array_3[j] = '\\0';\n\n    char_array_4[0] = ( char_array_3[0] & 0xfc) >> 2;\n    char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);\n    char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);\n\n    for (j = 0; (j < i + 1); j++)\n      ret += base64_chars[char_array_4[j]];\n\n    while((i++ < 3))\n      ret += '=';\n\n  }\n\n  return ret;\n\n}\n\nstd::string base64_decode(std::string const& encoded_string) {\n  size_t in_len = encoded_string.size();\n  int i = 0;\n  int j = 0;\n  int in_ = 0;\n  unsigned char char_array_4[4], char_array_3[3];\n  std::string ret;\n\n  while (in_len-- && ( encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {\n    char_array_4[i++] = encoded_string[in_]; in_++;\n    if (i ==4) {\n      for (i = 0; i <4; i++)\n        char_array_4[i] = base64_chars.find(char_array_4[i]) & 0xff;\n\n      char_array_3[0] = ( char_array_4[0] << 2       ) + ((char_array_4[1] & 0x30) >> 4);\n      char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);\n      char_array_3[2] = ((char_array_4[2] & 0x3) << 6) +   char_array_4[3];\n\n      for (i = 0; (i < 3); i++)\n        ret += char_array_3[i];\n      i = 0;\n    }\n  }\n\n  if (i) {\n    for (j = 0; j < i; j++)\n      char_array_4[j] = base64_chars.find(char_array_4[j]) & 0xff;\n\n    char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);\n    char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);\n\n    for (j = 0; (j < i - 1); j++) ret += char_array_3[j];\n  }\n\n  return ret;\n}\n"
  },
  {
    "path": "server/src/base64.h",
    "content": "//\n//  base64 encoding and decoding with C++.\n//  Version: 1.01.00\n//\n\n#ifndef BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A\n#define BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A\n\n#include <string>\n\nstd::string base64_encode(unsigned char const* , unsigned int len);\nstd::string base64_decode(std::string const& s);\n\n#endif /* BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A */\n"
  },
  {
    "path": "server/src/frame.cpp",
    "content": "#include <string>\n#include <sstream>\n#include <cstring>\n#include \"frame.hpp\"\n#include \"json.h\"\n#include \"base64.h\"\n\nusing namespace std;\n\nFrame_pool::Frame_pool()\n{\n  // pool unit sizes must match the buffers handed out in frame_init\n  mem_pool_msg = new CMemPool(MEM_POOL_UNIT_NUM, MSG_BUF_LEN);\n  mem_pool_seq = new CMemPool(MEM_POOL_UNIT_NUM, SEQ_BUF_LEN);\n  mem_pool_det = new CMemPool(MEM_POOL_UNIT_NUM, DET_BUF_LEN);\n};\n\nFrame_pool::Frame_pool(int unit_num)\n{\n  mem_pool_msg = new CMemPool(unit_num, MSG_BUF_LEN);\n  mem_pool_seq = new CMemPool(unit_num, SEQ_BUF_LEN);\n  mem_pool_det = new CMemPool(unit_num, DET_BUF_LEN);\n};\n\nFrame_pool::~Frame_pool()\n{\n\n};\n\nFrame Frame_pool::alloc_frame(void) {\n  Frame frame;\n  frame_init(frame);\n  return frame;\n};\n\nvoid Frame_pool::free_frame(Frame& frame) {\n  mem_pool_seq->Free((void *)frame.seq_buf);\n  mem_pool_msg->Free((void *)frame.msg_buf);\n  mem_pool_det->Free((void *)frame.det_buf);\n}\n\nvoid Frame_pool::frame_init(Frame& frame) {\n  frame.seq_len = frame.msg_len = frame.det_len = 0;\n  frame.seq_buf = (unsigned char *)(mem_pool_seq->Alloc(SEQ_BUF_LEN, true));\n  frame.msg_buf = (unsigned char *)(mem_pool_msg->Alloc(MSG_BUF_LEN, true));\n  frame.det_buf = (unsigned char *)(mem_pool_det->Alloc(DET_BUF_LEN, true));\n};\n\n\n\nint frame_to_json(void* buf, const Frame& frame) {\n\tstringstream ss;\n\tss << \"{\\n\\\"seq\\\":\\\"\" << base64_encode((unsigned char *)frame.seq_buf, frame.seq_len) << \"\\\",\\n\"\n\t\t<< \"\\\"msg\\\": \\\"\" << base64_encode((unsigned char*)(frame.msg_buf), frame.msg_len) << \"\\\",\\n\" \n\t\t<< \"\\\"det\\\": \\\"\" << base64_encode((unsigned char*)(frame.det_buf), frame.det_len) \n    << \"\\\"\\n}\";\n    \n\n\tmemcpy(buf, ss.str().c_str(), ss.str().size());\n\t((unsigned char*)buf)[ss.str().size()] = '\\0';\n\treturn ss.str().size();\n};\n\nvoid json_to_frame(void* buf, Frame& frame) {\n\tjson_object *raw_obj;\n\traw_obj = json_tokener_parse((const char*)buf);\n\tif (raw_obj == NULL)\n\t\treturn;\n\n\tjson_object *seq_obj = 
json_object_object_get(raw_obj, \"seq\");\n\tjson_object *msg_obj = json_object_object_get(raw_obj, \"msg\");\n\tjson_object *det_obj = json_object_object_get(raw_obj, \"det\");\n\n\tstring seq(base64_decode(json_object_get_string(seq_obj)));\n\tstring msg(base64_decode(json_object_get_string(msg_obj)));\n\tstring det(base64_decode(json_object_get_string(det_obj)));\n\n\tframe.seq_len = seq.size();\n\tframe.msg_len = msg.size();\n\tframe.det_len = det.size();\n\n\tmemcpy(frame.seq_buf, seq.c_str(), frame.seq_len);\n\t((unsigned char*)frame.seq_buf)[frame.seq_len] = '\\0';\n\n\tmemcpy(frame.msg_buf, msg.c_str(), frame.msg_len);\n\t((unsigned char*)frame.msg_buf)[frame.msg_len] = '\\0';\n\n  if (frame.det_len > 0) {\n    memcpy(frame.det_buf, det.c_str(), frame.det_len);\n    ((unsigned char*)frame.det_buf)[frame.det_len] = '\\0';\n  }\n\n  // json_object_object_get returns borrowed references,\n  // so only the root object is released\n  json_object_put(raw_obj);\n};\n"
  },
  {
    "path": "server/src/frame.hpp",
    "content": "#ifndef __FRAME_H\n#define __FRAME_H\n\n#include \"mem_pool.hpp\"\n\nstruct Frame {\n  int seq_len;\n  int msg_len;\n  int det_len;\n  unsigned char *seq_buf;\n  unsigned char *msg_buf;\n  unsigned char *det_buf;\n};\n\nconst int SEQ_BUF_LEN = 100;\nconst int MSG_BUF_LEN = 76800;\nconst int DET_BUF_LEN = 25600;\nconst int JSON_BUF_LEN = MSG_BUF_LEN * 2;\nclass Frame_pool\n{\nprivate:\n  CMemPool *mem_pool_msg;\n  CMemPool *mem_pool_seq;\n  CMemPool *mem_pool_det;\n  const int MEM_POOL_UNIT_NUM = 5000;\n\npublic:\n  Frame_pool();\n  Frame_pool(int unit_num);\n  Frame alloc_frame(void);\n  void free_frame(Frame& frame);\n  void frame_init(Frame& frame);\n  ~Frame_pool();\n};\n\nint frame_to_json(void* buf, const Frame& frame);\nvoid json_to_frame(void* buf, Frame& frame);\n\n#endif\n"
  },
  {
    "path": "server/src/mem_pool.cpp",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include \"mem_pool.hpp\"\n\n/*==========================================================\nCMemPool:\n    Constructor of this class. It allocate memory block from system and create\n    a static double linked list to manage all memory unit.\n\nParameters:\n    [in]ulUnitNum\n    The number of unit which is a part of memory block.\n\n    [in]ulUnitSize\n    The size of unit.\n//=========================================================\n*/\nCMemPool::CMemPool(unsigned long ulUnitNum,unsigned long ulUnitSize) :\n    m_pMemBlock(NULL), m_pAllocatedMemBlock(NULL), m_pFreeMemBlock(NULL), \n    m_ulBlockSize(ulUnitNum * (ulUnitSize+sizeof(struct _Unit))), \n    m_ulUnitSize(ulUnitSize)\n{    \n    m_pMemBlock = malloc(m_ulBlockSize);     //Allocate a memory block.\n    \n    if(NULL != m_pMemBlock)\n    {\n        for(unsigned long i=0; i<ulUnitNum; i++)  //Link all mem unit . Create linked list.\n        {\n            struct _Unit *pCurUnit = (struct _Unit *)( (char *)m_pMemBlock + i*(ulUnitSize+sizeof(struct _Unit)) );\n            \n            pCurUnit->pPrev = NULL;\n            pCurUnit->pNext = m_pFreeMemBlock;    //Insert the new unit at head.\n            \n            if(NULL != m_pFreeMemBlock)\n            {\n                m_pFreeMemBlock->pPrev = pCurUnit;\n            }\n            m_pFreeMemBlock = pCurUnit;\n        }\n    }    \n} \n\n/*===============================================================\n~CMemPool():\n    Destructor of this class. Its task is to free memory block.\n//===============================================================\n*/\nCMemPool::~CMemPool()\n{\n    free(m_pMemBlock);\n}\n\n/*================================================================\nAlloc:\n    To allocate a memory unit. 
If memory pool can`t provide proper memory unit,\n    It will call system function.\n\nParameters:\n    [in]ulSize\n    Memory unit size.\n\n    [in]bUseMemPool\n    Whether use memory pool.\n\nReturn Values:\n    Return a pointer to a memory unit.\n//=================================================================\n*/\nvoid* CMemPool::Alloc(unsigned long ulSize, bool bUseMemPool)\n{\n    if(    ulSize > m_ulUnitSize || false == bUseMemPool || \n        NULL == m_pMemBlock   || NULL == m_pFreeMemBlock)\n    {\n        return malloc(ulSize);\n    }\n\n    //Now FreeList isn`t empty\n    struct _Unit *pCurUnit = m_pFreeMemBlock;\n    m_pFreeMemBlock = pCurUnit->pNext;            //Get a unit from free linkedlist.\n    if(NULL != m_pFreeMemBlock)\n    {\n        m_pFreeMemBlock->pPrev = NULL;\n    }\n\n    pCurUnit->pNext = m_pAllocatedMemBlock;\n    \n    if(NULL != m_pAllocatedMemBlock)\n    {\n        m_pAllocatedMemBlock->pPrev = pCurUnit; \n    }\n    m_pAllocatedMemBlock = pCurUnit;\n\n    return (void *)((char *)pCurUnit + sizeof(struct _Unit) );\n}\n\n/*================================================================\nFree:\n    To free a memory unit. If the pointer of parameter point to a memory unit,\n    then insert it to \"Free linked list\". 
Otherwise, call system function \"free\".\n\nParameters:\n    [in]p\n    It points to a memory unit that is to be freed.\n\nReturn Values:\n    none\n//================================================================\n*/\nvoid CMemPool::Free( void* p )\n{\n    if(m_pMemBlock<p && p<(void *)((char *)m_pMemBlock + m_ulBlockSize) )\n    {\n        struct _Unit *pCurUnit = (struct _Unit *)((char *)p - sizeof(struct _Unit) );\n\n        //Unlink the unit from the allocated list; it is not necessarily the head.\n        if(NULL != pCurUnit->pPrev)\n        {\n            pCurUnit->pPrev->pNext = pCurUnit->pNext;\n        }\n        else\n        {\n            m_pAllocatedMemBlock = pCurUnit->pNext;\n        }\n        if(NULL != pCurUnit->pNext)\n        {\n            pCurUnit->pNext->pPrev = pCurUnit->pPrev;\n        }\n        pCurUnit->pPrev = NULL;\n\n        pCurUnit->pNext = m_pFreeMemBlock;\n        if(NULL != m_pFreeMemBlock)\n        {\n             m_pFreeMemBlock->pPrev = pCurUnit;\n        }\n\n        m_pFreeMemBlock = pCurUnit;\n    }\n    else\n    {\n        free(p);\n    }\n}\n"
  },
  {
    "path": "server/src/mem_pool.hpp",
    "content": "#ifndef __MEMPOOL_H__\n#define __MEMPOOL_H__\n// https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how-to-implement-it\nclass CMemPool\n{\nprivate:\n    //The purpose of the structure`s definition is that we can operate linkedlist conveniently\n    struct _Unit                     //The type of the node of linkedlist.\n    {\n        struct _Unit *pPrev, *pNext;\n    };\n\n    void* m_pMemBlock;                //The address of memory pool.\n\n    //Manage all unit with two linkedlist.\n    struct _Unit*    m_pAllocatedMemBlock; //Head pointer to Allocated linkedlist.\n    struct _Unit*    m_pFreeMemBlock;      //Head pointer to Free linkedlist.\n\n    unsigned long    m_ulUnitSize; //Memory unit size. There are much unit in memory pool.\n    unsigned long    m_ulBlockSize;//Memory pool size. Memory pool is make of memory unit.\n\npublic:\n    CMemPool(unsigned long lUnitNum = 50, unsigned long lUnitSize = 1024);\n    ~CMemPool();\n    \n    void* Alloc(unsigned long ulSize, bool bUseMemPool = true); //Allocate memory unit\n    void Free( void* p );                                   //Free memory unit\n};\n\n#endif //__MEMPOOL_H__\n"
  },
  {
    "path": "server/src/people.cpp",
    "content": "#include <boost/archive/text_oarchive.hpp>\n#include <boost/archive/text_iarchive.hpp>\n#include <boost/serialization/vector.hpp>\n#include <opencv2/opencv.hpp>\n#include <cfloat>\n#include <iostream>\n#include <cstring>\n#include <cmath>\n#include <string>\n#include <sstream>\n#include <deque>\n#include <queue>\n#include \"people.hpp\"\n\nvoid Person::set_part(int part, float x, float y) {\n  history.front().x[part] = x;\n  history.front().y[part] = y;\n\n  // bounding box update: independent checks, since a joint may be both\n  // the new min and the new max (e.g. the first joint seen)\n  if (x < min_x)\n    min_x = x;\n  if (x > max_x)\n    max_x = x;\n\n  if (y < min_y)\n    min_y = y;\n  if (y > max_y)\n    max_y = y;\n}\n\nvoid Person::set_action(int type) {\n  if (type < 0)\n    type = ACTION_TYPE_NUM;\n  action = type;\n\n  if (actions.size() == ACTION_HIS_NUM)\n    actions.pop_front();\n  actions.push_back(type);\n}\n  \nfloat Person::get_dist(float x1, float y1, float x2, float y2) {\n  if (x1 == 0 || x2 == 0)\n    return 0.0;\n  else {\n    return sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));\n  }\n}\n\nfloat Person::get_deg(float x1, float y1, float x2, float y2) {\n  if (x1 == 0 || x2 == 0)\n    return 0.0;\n  double dx = x2 - x1;\n  double dy = y2 - y1;\n  double rad = atan2(dy, dx);\n  double degree = (rad * 180) / M_PI;\n  if (degree < 0)\n    degree += 360;\n  return degree;\n}\n\nbool Person::has_output(void) {\n  if (overlap_count <= 0 && history.size() == HIS_NUM) {\n    overlap_count = OVERLAP_NUM;\n    return true;\n  }\n  else \n    return false;\n}\n\nvoid Person::update(Person* n_p) \n{\n  static const int change_part[] = {\n    RELBOW, RWRIST, LELBOW, LWRIST, RKNEE, RANKLE, LKNEE, LANKLE\n  };\n  static const int change_pair[] = {\n    RSHOULDER, RELBOW, \n    RELBOW, RWRIST, \n    LSHOULDER, LELBOW, \n    LELBOW, LWRIST, \n    RHIP, RKNEE, \n    RKNEE, RANKLE, \n    LHIP, LKNEE, \n    LKNEE, LANKLE\n  };\n\n  Change c;\n  double deg, n_deg;\n  int part, pair_1, pair_2;\n  const Joint& j = history.back();\n  const 
Joint& n_j = n_p->history.front();\n\n  // change calc\n  for (int i = 0; i < CHANGE_NUM; i++) {\n    part = change_part[i];\n    // get_dist takes (x1, y1, x2, y2)\n    c.dist[i] = get_dist(j.x[part], j.y[part], n_j.x[part], n_j.y[part]);\n\n    pair_1 = change_pair[i * 2];\n    pair_2 = change_pair[i * 2 + 1];\n\n    deg = get_deg(j.x[pair_1], j.y[pair_1], j.x[pair_2], j.y[pair_2]);\n    n_deg = get_deg(n_j.x[pair_1], n_j.y[pair_1], n_j.x[pair_2], n_j.y[pair_2]);\n    c.cur_deg[i] = n_deg;\n    if (deg == 0 || n_deg == 0)\n      c.deg[i] = 0.0;\n    else \n      c.deg[i] = fabs(deg - n_deg);\n  }\n\n  if (history.size() == HIS_NUM) {\n    history.pop_front();\n    change_history.pop_front();\n  }\n  history.push_back(n_p->history.front());\n  change_history.push_back(c);\n  overlap_count--;\n\n  // delete\n  delete n_p;\n}\n\n/*\nstd::string get_history(void) const \n{\n  std::stringstream ss;\n  for (int i = 0; i < history.size(); i++) {\n    if (i != 0)\n      ss << '\\n';\n    for (int j = 0; j < JOINT_NUM; j++) {\n      if (j != 0)\n        ss << ',';\n      ss << history[i].x[j] << ',' << history[i].y[j];\n    }\n  }\n  for (int i = history.size(); i < HIS_NUM; i++) {\n    if (i != 0)\n      ss << '\\n';\n    for (int j = 0; j < JOINT_NUM; j++) {\n      if (j != 0)\n        ss << ',';\n      ss << 0.0 << ',' << 0.0;\n    }\n  }\n  return ss.str();\n}\n*/\n\nbool Person::check_crash(const Person& other) const\n{\n  const static int punch_check_joint[] = {RELBOW, RWRIST, LELBOW, LWRIST};\n  const static int kick_check_joint[] = {RKNEE, RANKLE, LKNEE, LANKLE};\n  \n  const int* check_joint = nullptr;\n  int my_action = this->get_action();\n\n  if (my_action == PUNCH)\n    check_joint = punch_check_joint; \n  else if (my_action == KICK)\n    check_joint = kick_check_joint;\n  else\n    return false;\n\n  cv::Rect_<float> other_rect = other.get_rect();\n\n  const Joint& j = history.back();\n  float x, y;\n  for (int i = 0; i < 4; i++) {\n    x = j.x[check_joint[i]];\n    y = 
j.y[check_joint[i]];\n\n    if (other_rect.x <= x && x <= other_rect.x + other_rect.width &&\n        other_rect.y <= y && y <= other_rect.y + other_rect.height)\n      return true;\n  }\n  return false;\n}\n\ncv::Rect_<float> Person::get_crash_rect(const Person& p) const\n{\n  cv::Rect_<float> a_rect = this->get_rect();\n  cv::Rect_<float> b_rect = p.get_rect();\n\n  float min_x, min_y, max_x, max_y;\n\n  min_x = a_rect.x < b_rect.x ? a_rect.x : b_rect.x;\n  min_y = a_rect.y < b_rect.y ? a_rect.y : b_rect.y;\n  max_x = (a_rect.x + a_rect.width) > (b_rect.x + b_rect.width) ? (a_rect.x + a_rect.width) : (b_rect.x +\n      b_rect.width);\n  max_y = (a_rect.y + a_rect.height) > (b_rect.y + b_rect.height) ? (a_rect.y + a_rect.height) : (b_rect.y +\n      b_rect.height);\n\n  return cv::Rect_<float>(cv::Point_<float>(min_x, min_y),\n      cv::Point_<float>(max_x, max_y));\n}\n\nstd::string Person::get_history(void) const \n{\n  std::stringstream ss;\n  for (size_t i = 0; i < change_history.size(); i++) {\n    if (i != 0)\n      ss << '\\n';\n    for (int j = 0; j < CHANGE_NUM; j++) {\n      if (j != 0)\n        ss << ',';\n      ss << change_history[i].dist[j] << ',' << change_history[i].deg[j] << ',' << change_history[i].cur_deg[j];\n    }\n  }\n  for (int i = change_history.size(); i < HIS_NUM - 1; i++) {\n    if (i != 0)\n      ss << '\\n';\n    for (int j = 0; j < CHANGE_NUM; j++) {\n      if (j != 0)\n        ss << ',';\n      ss << 0.0 << ',' << 0.0 << ',' << 0.0;\n    }\n  }\n  return ss.str();\n}\n\ncv::Rect_<float> Person::get_rect(void) const\n{\n  return cv::Rect_<float>(cv::Point_<float>(min_x, min_y),\n      cv::Point_<float>(max_x, max_y));\n}\n\nvoid Person::set_rect(cv::Rect_<float>& rect) \n{\n  min_x = rect.x;\n  min_y = rect.y;\n  max_x = rect.x + rect.width;\n  max_y = rect.y + rect.height;\n}\n\nPerson& Person::operator=(const Person& p)\n{\n  track_id = p.track_id;\n  max_x = p.max_x;\n  max_y = p.max_y;\n  min_x = p.min_x;\n  min_y = p.min_y;\n  
history = p.history;\n  change_history = p.change_history;\n  overlap_count = p.overlap_count;\n  return *this;\n}\n\nstd::ostream& operator<<(std::ostream &out, const Person &p)\n{\n  for (size_t i = 0; i < p.change_history.size(); i++) {\n    for (int j = 0; j < p.CHANGE_NUM; j++) {\n      if (j != 0)\n        out << ',';\n      out << p.change_history[i].dist[j] << ',' <<  p.change_history[i].deg[j] << ',' << p.change_history[i].cur_deg[j];\n    }\n    out << '\\n';\n  }\n  return out;\n}\n\n\nstd::vector<Person*> People::to_person(void) {\n  std::vector<Person*> persons;\n  int person_num = keyshape[0];\n  int part_num = keyshape[1];\n  for (int person = 0; person < person_num; person++) {\n    Person *p = new Person();\n    for (int part = 0; part < part_num; part++) {\n      int index = (person * part_num + part) * keyshape[2];\n      if (keypoints[index + 2] >  thresh) {\n        p->set_part(part, keypoints[index] * scale, keypoints[index + 1] * scale);\n      }\n    }\n    persons.push_back(p);\n  }\n  return persons;\n}\n\nstd::string People::get_output(void) {\n\tstd::string out_str = \"\\\"people\\\": [\\n\";\n\tint person_num = keyshape[0];\n\tint part_num = keyshape[1];\n\tfor (int person = 0; person < person_num; person++) {\n\t\tif (person != 0)\n\t\t\tout_str += \",\\n\";\n\t\tout_str += \" {\\n\";\n\t\tfor (int part = 0; part < part_num; part++) {\n\t\t\tif (part != 0) \n\t\t\t\tout_str += \",\\n \";\n\t\t\tint index = (person * part_num + part) * keyshape[2];\n\t\t\tchar *buf = (char*)calloc(2048, sizeof(char));\n\n\t\t\tif (keypoints[index + 2] >  thresh) {\n\t\t\t\tsprintf(buf, \" \\\"%d\\\":[%f, %f]\", part, keypoints[index] * scale, keypoints[index + 1] * scale);\n\t\t\t}\n\t\t\telse {\n\t\t\t\tsprintf(buf, \" \\\"%d\\\":[%f, %f]\", part, 0.0, 0.0);\n\t\t\t}\n\t\t\tout_str += buf;\n\t\t\tfree(buf);\n\t\t}\n\t\tout_str += \"\\n }\";\n\t}\n\tout_str += \"\\n ]\";\n\treturn out_str;\n}\n\nvoid People::render_pose_keypoints(cv::Mat& frame)\n{\n  
const int num_keypoints = keyshape[1];\n  unsigned int pairs[] =\n  {\n    1, 2, 1, 5, 2, 3, 3, 4, 5, 6, 6, 7, 1, 8, 8, 9, 9, 10,\n    1, 11, 11, 12, 12, 13, 1, 0, 0, 14, 14, 16, 0, 15, 15, 17\n  };\n  float colors[] =\n  {\n    255.f, 0.f, 85.f, 255.f, 0.f, 0.f, 255.f, 85.f, 0.f, 255.f, 170.f, 0.f,\n    255.f, 255.f, 0.f, 170.f, 255.f, 0.f, 85.f, 255.f, 0.f, 0.f, 255.f, 0.f,\n    0.f, 255.f, 85.f, 0.f, 255.f, 170.f, 0.f, 255.f, 255.f, 0.f, 170.f, 255.f,\n    0.f, 85.f, 255.f, 0.f, 0.f, 255.f, 255.f, 0.f, 170.f, 170.f, 0.f, 255.f,\n    255.f, 0.f, 255.f, 85.f, 0.f, 255.f\n  };\n  const int pairs_size = sizeof(pairs) / sizeof(unsigned int);\n  const int number_colors = sizeof(colors) / sizeof(float);\n\n  for (int person = 0; person < keyshape[0]; ++person)\n  {\n    // Draw lines\n    for (int pair = 0u; pair < pairs_size; pair += 2)\n    {\n      const int index1 = (person * num_keypoints + pairs[pair]) * keyshape[2];\n      const int index2 = (person * num_keypoints + pairs[pair + 1]) * keyshape[2];\n      if (keypoints[index1 + 2] > thresh && keypoints[index2 + 2] > thresh)\n      {\n        const int color_index = pairs[pair + 1] * 3;\n        cv::Scalar color { colors[(color_index + 2) % number_colors],\n          colors[(color_index + 1) % number_colors],\n          colors[(color_index + 0) % number_colors]};\n        cv::Point keypoint1{ intRoundUp(keypoints[index1] * scale), intRoundUp(keypoints[index1 + 1] * scale) };\n        cv::Point keypoint2{ intRoundUp(keypoints[index2] * scale), intRoundUp(keypoints[index2 + 1] * scale) };\n        cv::line(frame, keypoint1, keypoint2, color, 2);\n      }\n    }\n    // Draw circles\n    for (int part = 0; part < num_keypoints; ++part)\n    {\n      const int index = (person * num_keypoints + part) * keyshape[2];\n      if (keypoints[index + 2] > thresh)\n      {\n        const int color_index = part * 3;\n        cv::Scalar color { colors[(color_index + 2) % number_colors],\n          colors[(color_index + 1) % 
number_colors],\n          colors[(color_index + 0) % number_colors]};\n        cv::Point center{ intRoundUp(keypoints[index] * scale), intRoundUp(keypoints[index + 1] * scale) };\n        cv::circle(frame, center, 3, color, -1);\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "server/src/people.hpp",
    "content": "#ifndef __PEOPLE\n#define __PEOPLE\n#include <boost/archive/text_oarchive.hpp>\n#include <boost/archive/text_iarchive.hpp>\n#include <boost/serialization/vector.hpp>\n#include <opencv2/opencv.hpp>\n#include <cfloat>\n#include <iostream>\n#include <cstring>\n#include <cmath>\n#include <string>\n#include <sstream>\n#include <deque>\n#include <queue>\n\ntemplate<typename T>\ninline int intRoundUp(const T a)\n{\n    return int(a+0.5f);\n}\n\nclass Person\n{\nprivate:\n  enum { \n    UNTRACK = -1,\n    OVERLAP_NUM = 6, \n    HIS_NUM = 33, \n    CHANGE_NUM = 8, \n\n    // action \n    ACTION_TYPE_NUM = 4, ACTION_HIS_NUM = 10, \n    STAND = 0, WALK = 1, PUNCH = 2, KICK = 3, UNKNOWN = 4,\n\n    // coco joint keypoint\n    JOINT_NUM = 18,\n    NOSE = 0, NECK = 1, RSHOULDER = 2, RELBOW = 3, RWRIST = 4, LSHOULDER = 5, LELBOW = 6,\n    LWRIST = 7, RHIP = 8, RKNEE = 9, RANKLE = 10, LHIP = 11, LKNEE = 12, LANKLE = 13, \n    REYE = 14, LEYE = 15, REAR = 16, LEAR = 17\n  };\n  struct Joint {\n    float x[JOINT_NUM];\n    float y[JOINT_NUM];\n  };\n\n  struct Change {\n    float dist[CHANGE_NUM];\n    float deg[CHANGE_NUM];\n    float cur_deg[CHANGE_NUM];\n  };\n\n  std::deque<Joint> history;\n  std::deque<Change> change_history;\n  std::deque<int> actions;\n\n  int overlap_count;\n  int track_id;\n  int action;\n\n  // for bounding box\n  float max_x;\n  float max_y;\n  float min_x;\n  float min_y;\n\n  Person* enemy;\n\npublic:\n  Person() : enemy(nullptr), overlap_count(0), track_id(UNTRACK), action(ACTION_TYPE_NUM), max_x(FLT_MIN), max_y(FLT_MIN), min_x(FLT_MAX), min_y(FLT_MAX) { history.assign(1, {0}); }\n\n  ~Person() { if (enemy != nullptr) enemy->set_enemy(nullptr); }\n  // set\n  void set_part(int part, float x, float y);\n  void set_id(int id) { track_id = id; }\n  void set_action(int type);  \n  void set_enemy(Person* p) { enemy = p; }\n  void set_rect(cv::Rect_<float>& rect);\n\n  // get\n  inline int get_id(void) const { return track_id; }\n  inline int 
get_action(void) const { return action; }\n  inline const char* get_action_str(void) const { \n    static const char* action_str[] = {\"STAND\", \"WALK\", \"PUNCH\", \"KICK\", \"UNKNOWN\"};\n    return action_str[action]; \n  }\n  inline const Person* get_enemy(void) const { return enemy; }\n  cv::Rect_<float> get_rect(void) const;\n  cv::Rect_<float> get_crash_rect(const Person& p) const;\n\n  // crash\n  inline bool is_danger(void) const { return (action == PUNCH || action == KICK); }\n  bool check_crash(const Person& p) const;\n\n  void update(Person* n_p);\n  bool has_output(void);\n  std::string get_history(void) const;\n\n  // util\n  float get_dist(float x1, float y1, float x2, float y2); \n  float get_deg(float x1, float y1, float x2, float y2);\n\n  Person& operator=(const Person& p);\n  friend std::ostream& operator<<(std::ostream &out, const Person &p);\n};\n\nclass People\n{\npublic:\n  friend class boost::serialization::access;\n  const float thresh = 0.05;\n  std::vector<float> keypoints;\n  std::vector<int> keyshape;\n  float scale;\n\n  People() {}\n\n  People(std::vector<float> _keypoints, std::vector<int> _keyshape, float _scale) :\n    keypoints(_keypoints), keyshape(_keyshape), scale(_scale) {}\n\n  inline int get_person_num(void) const { return keyshape[0]; };\n\n  std::vector<Person*> to_person(void);\n  std::string get_output(void);\n  void render_pose_keypoints(cv::Mat& frame);\n\n  template<class Archive>\n  void serialize(Archive & ar, const unsigned int version)\n  {\n    ar & keypoints;\n    ar & keyshape;\n    ar & scale;\n  }\n};\n\n#endif\n"
  },
  {
    "path": "server/src/pose_detector.cpp",
    "content": "#include <iostream>\n#include <vector>\nusing namespace std;\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\nusing namespace cv;\n#include \"pose_detector.hpp\"\n\n#define POSE_MAX_PEOPLE 96\n#define NET_OUT_CHANNELS 57 // 38 for pafs, 19 for parts\n\ntemplate<typename T>\ninline int intRound(const T a)\n{\n  return int(a+0.5f);\n}\n\ntemplate<typename T>\ninline T fastMin(const T a, const T b)\n{\n  return (a < b ? a : b);\n}\n\nvoid PoseDetector::connect_bodyparts\n    (\n    vector<float>& pose_keypoints,\n    const float* const map,\n    const float* const peaks,\n    int mapw,\n    int maph,\n    const int inter_min_above_th,\n    const float inter_th,\n    const int min_subset_cnt,\n    const float min_subset_score,\n    vector<int>& keypoint_shape\n    )\n{\n  keypoint_shape.resize(3);\n  const int body_part_pairs[] =\n  {\n    1, 2, 1, 5, 2, 3, 3, 4, 5, 6, 6, 7, 1, 8, 8, 9, 9, 10, 1, 11, 11,\n    12, 12, 13, 1, 0, 0, 14, 14, 16, 0, 15, 15, 17, 2, 16, 5, 17\n  };\n  const int limb_idx[] =\n  {\n    31, 32, 39, 40, 33, 34, 35, 36, 41, 42, 43, 44, 19, 20, 21, 22, 23, 24, 25,\n    26, 27, 28, 29, 30, 47, 48, 49, 50, 53, 54, 51, 52, 55, 56, 37, 38, 45, 46\n  };\n  const int num_body_parts = 18; // COCO part number\n  const int num_body_part_pairs = num_body_parts + 1;\n  std::vector<std::pair<std::vector<int>, double>> subset;\n  const int subset_counter_index = num_body_parts;\n  const int subset_size = num_body_parts + 1;\n  const int peaks_offset = 3 * (POSE_MAX_PEOPLE + 1);\n  const int map_offset = mapw * maph;\n\n  for (unsigned int pair_index = 0u; pair_index < num_body_part_pairs; ++pair_index)\n  {\n    const int body_partA = body_part_pairs[2 * pair_index];\n    const int body_partB = body_part_pairs[2 * pair_index + 1];\n    const float* candidateA = peaks + body_partA*peaks_offset;\n    const float* candidateB = peaks + body_partB*peaks_offset;\n    const int nA = 
(int)(candidateA[0]); // number of part A candidates\n    const int nB = (int)(candidateB[0]); // number of part B candidates\n\n    // add parts into the subset in special case\n    if (nA == 0 || nB == 0)\n    {\n      // Change w.r.t. other\n      if (nA == 0) // nB == 0 or not\n      {\n        for (int i = 1; i <= nB; ++i)\n        {\n          bool num = false;\n          for (unsigned int j = 0u; j < subset.size(); ++j)\n          {\n            const int off = body_partB*peaks_offset + i * 3 + 2;\n            if (subset[j].first[body_partB] == off)\n            {\n              num = true;\n              break;\n            }\n          }\n          if (!num)\n          {\n            std::vector<int> row_vector(subset_size, 0);\n            // store the index\n            row_vector[body_partB] = body_partB*peaks_offset + i * 3 + 2;\n            // the parts number of that person\n            row_vector[subset_counter_index] = 1;\n            // total score\n            const float subsetScore = candidateB[i * 3 + 2];\n            subset.emplace_back(std::make_pair(row_vector, subsetScore));\n          }\n        }\n      }\n      else // if (nA != 0 && nB == 0)\n      {\n        for (int i = 1; i <= nA; i++)\n        {\n          bool num = false;\n          for (unsigned int j = 0u; j < subset.size(); ++j)\n          {\n            const int off = body_partA*peaks_offset + i * 3 + 2;\n            if (subset[j].first[body_partA] == off)\n            {\n              num = true;\n              break;\n            }\n          }\n          if (!num)\n          {\n            std::vector<int> row_vector(subset_size, 0);\n            // store the index\n            row_vector[body_partA] = body_partA*peaks_offset + i * 3 + 2;\n            // parts number of that person\n            row_vector[subset_counter_index] = 1;\n            // total score\n            const float subsetScore = candidateA[i * 3 + 2];\n            
subset.emplace_back(std::make_pair(row_vector, subsetScore));\n          }\n        }\n      }\n    }\n    else // if (nA != 0 && nB != 0)\n    {\n      std::vector<std::tuple<double, int, int>> temp;\n      const int num_inter = 10;\n      // limb PAF x-direction heatmap\n      const float* const mapX = map + limb_idx[2 * pair_index] * map_offset;\n      // limb PAF y-direction heatmap\n      const float* const mapY = map + limb_idx[2 * pair_index + 1] * map_offset;\n      // start greedy algorithm\n      for (int i = 1; i <= nA; i++)\n      {\n        for (int j = 1; j <= nB; j++)\n        {\n          // keep the deltas in float so the PAF line is sampled at fractional steps below\n          const float dX = candidateB[j * 3] - candidateA[i * 3];\n          const float dY = candidateB[j * 3 + 1] - candidateA[i * 3 + 1];\n          const float norm_vec = float(std::sqrt(dX*dX + dY*dY));\n          // if the peaks are coincident, don't connect them\n          if (norm_vec > 1e-6)\n          {\n            const float sX = candidateA[i * 3];\n            const float sY = candidateA[i * 3 + 1];\n            const float vecX = dX / norm_vec;\n            const float vecY = dY / norm_vec;\n            float sum = 0.f;\n            int count = 0;\n            for (int lm = 0; lm < num_inter; lm++)\n            {\n              const int mX = fastMin(mapw - 1, intRound(sX + lm*dX / num_inter));\n              const int mY = fastMin(maph - 1, intRound(sY + lm*dY / num_inter));\n              const int idx = mY * mapw + mX;\n              const float score = (vecX*mapX[idx] + vecY*mapY[idx]);\n              if (score > inter_th)\n              {\n                sum += score;\n                ++count;\n              }\n            }\n\n            // parts score + connection score\n            if (count > inter_min_above_th)\n            {\n              temp.emplace_back(std::make_tuple(sum / count, i, j));\n            }\n          }\n        }\n      }\n      // select the top minAB connection, assuming that each part occurs only once\n      // sort rows in 
descending order based on parts + connection score\n      if (!temp.empty())\n      {\n        // comparator element type must match temp's, otherwise every comparison\n        // converts through a temporary float tuple\n        std::sort(temp.begin(), temp.end(), std::greater<std::tuple<double, int, int>>());\n      }\n      std::vector<std::tuple<int, int, double>> connectionK;\n\n      const int minAB = fastMin(nA, nB);\n      // assuming that each part occurs only once, filter out same part1 to different part2\n      std::vector<int> occurA(nA, 0);\n      std::vector<int> occurB(nB, 0);\n      int counter = 0;\n      for (unsigned int row = 0u; row < temp.size(); row++)\n      {\n        const double score = std::get<0>(temp[row]);\n        const int aidx = std::get<1>(temp[row]);\n        const int bidx = std::get<2>(temp[row]);\n        if (!occurA[aidx - 1] && !occurB[bidx - 1])\n        {\n          // save two part score \"position\" and limb mean PAF score\n          connectionK.emplace_back(std::make_tuple(body_partA*peaks_offset + aidx * 3 + 2,\n                body_partB*peaks_offset + bidx * 3 + 2, score));\n          ++counter;\n          if (counter == minAB)\n          {\n            break;\n          }\n          occurA[aidx - 1] = 1;\n          occurB[bidx - 1] = 1;\n        }\n      }\n      // Cluster all the body part candidates into subset based on the part connection\n      // initialize first body part connection\n      if (pair_index == 0)\n      {\n        for (const auto& connectionKI : connectionK)\n        {\n          std::vector<int> row_vector(num_body_parts + 3, 0);\n          const int indexA = std::get<0>(connectionKI);\n          const int indexB = std::get<1>(connectionKI);\n          const double score = std::get<2>(connectionKI);\n          row_vector[body_part_pairs[0]] = indexA;\n          row_vector[body_part_pairs[1]] = indexB;\n          row_vector[subset_counter_index] = 2;\n          // add the score of parts and the connection\n          const double subset_score = peaks[indexA] + peaks[indexB] + score;\n          
subset.emplace_back(std::make_pair(row_vector, subset_score));\n        }\n      }\n      // Add ears connections (in case person is looking to opposite direction to camera)\n      else if (pair_index == 17 || pair_index == 18)\n      {\n        for (const auto& connectionKI : connectionK)\n        {\n          const int indexA = std::get<0>(connectionKI);\n          const int indexB = std::get<1>(connectionKI);\n          for (auto& subsetJ : subset)\n          {\n            auto& subsetJ_first = subsetJ.first[body_partA];\n            auto& subsetJ_first_plus1 = subsetJ.first[body_partB];\n            if (subsetJ_first == indexA && subsetJ_first_plus1 == 0)\n            {\n              subsetJ_first_plus1 = indexB;\n            }\n            else if (subsetJ_first_plus1 == indexB && subsetJ_first == 0)\n            {\n              subsetJ_first = indexA;\n            }\n          }\n        }\n      }\n      else\n      {\n        if (!connectionK.empty())\n        {\n          for (unsigned int i = 0u; i < connectionK.size(); ++i)\n          {\n            const int indexA = std::get<0>(connectionK[i]);\n            const int indexB = std::get<1>(connectionK[i]);\n            const double score = std::get<2>(connectionK[i]);\n            int num = 0;\n            // if A is already in the subset, add B\n            for (unsigned int j = 0u; j < subset.size(); j++)\n            {\n              if (subset[j].first[body_partA] == indexA)\n              {\n                subset[j].first[body_partB] = indexB;\n                ++num;\n                subset[j].first[subset_counter_index] = subset[j].first[subset_counter_index] + 1;\n                subset[j].second = subset[j].second + peaks[indexB] + score;\n              }\n            }\n            // if A is not found in the subset, create new one and add both\n            if (num == 0)\n            {\n              std::vector<int> row_vector(subset_size, 0);\n              row_vector[body_partA] = 
indexA;\n              row_vector[body_partB] = indexB;\n              row_vector[subset_counter_index] = 2;\n              const float subsetScore = peaks[indexA] + peaks[indexB] + score;\n              subset.emplace_back(std::make_pair(row_vector, subsetScore));\n            }\n          }\n        }\n      }\n    }\n  }\n\n  // Delete people below thresholds, and save to output\n  int number_people = 0;\n  std::vector<int> valid_subset_indexes;\n  valid_subset_indexes.reserve(fastMin((size_t)POSE_MAX_PEOPLE, subset.size()));\n  for (unsigned int index = 0; index < subset.size(); ++index)\n  {\n    const int subset_counter = subset[index].first[subset_counter_index];\n    const double subset_score = subset[index].second;\n    if (subset_counter >= min_subset_cnt && (subset_score / subset_counter) > min_subset_score)\n    {\n      ++number_people;\n      valid_subset_indexes.emplace_back(index);\n      if (number_people == POSE_MAX_PEOPLE)\n      {\n        break;\n      }\n    }\n  }\n\n  // Fill and return pose_keypoints\n  keypoint_shape = { number_people, (int)num_body_parts, 3 };\n  if (number_people > 0)\n  {\n    pose_keypoints.resize(number_people * (int)num_body_parts * 3);\n  }\n  else\n  {\n    pose_keypoints.clear();\n  }\n  for (unsigned int person = 0u; person < valid_subset_indexes.size(); ++person)\n  {\n    const auto& subsetI = subset[valid_subset_indexes[person]].first;\n    for (int bodyPart = 0u; bodyPart < num_body_parts; bodyPart++)\n    {\n      const int base_offset = (person*num_body_parts + bodyPart) * 3;\n      const int body_part_index = subsetI[bodyPart];\n      if (body_part_index > 0)\n      {\n        pose_keypoints[base_offset] = peaks[body_part_index - 2];\n        pose_keypoints[base_offset + 1] = peaks[body_part_index - 1];\n        pose_keypoints[base_offset + 2] = peaks[body_part_index];\n      }\n      else\n      {\n        pose_keypoints[base_offset] = 0.f;\n        pose_keypoints[base_offset + 1] = 0.f;\n        
  pose_keypoints[base_offset + 2] = 0.f;\n      }\n    }\n  }\n}\n\nvoid PoseDetector::find_heatmap_peaks\n    (\n    const float *src,\n    float *dst,\n    const int SRCW,\n    const int SRCH,\n    const int SRC_CH,\n    const float TH\n    )\n{\n  // find peaks (8-connected neighbor), weighted over a 7 by 7 area to get sub-pixel location and response\n  const int SRC_PLANE_OFFSET = SRCW * SRCH;\n  // add 1 for saving total people count, 3 for x, y, score\n  const int DST_PLANE_OFFSET = (POSE_MAX_PEOPLE + 1) * 3;\n  float *dstptr = dst;\n  int c = 0;\n  int x = 0;\n  int y = 0;\n  int i = 0;\n  int j = 0;\n  // TODO: reduce multiplication by using pointer\n  for(c = 0; c < SRC_CH - 1; ++c)\n  {\n    int num_people = 0;\n    for(y = 1; y < SRCH - 1 && num_people != POSE_MAX_PEOPLE; ++y)\n    {\n      for(x = 1; x < SRCW - 1 && num_people != POSE_MAX_PEOPLE; ++x)\n      {\n        int idx  = y * SRCW + x;\n        float value = src[idx];\n        if (value > TH)\n        {\n          const float TOPLEFT = src[idx - SRCW - 1];\n          const float TOP = src[idx - SRCW];\n          const float TOPRIGHT = src[idx - SRCW + 1];\n          const float LEFT = src[idx - 1];\n          const float RIGHT = src[idx + 1];\n          const float BOTTOMLEFT = src[idx + SRCW - 1];\n          const float BOTTOM = src[idx + SRCW];\n          const float BOTTOMRIGHT = src[idx + SRCW + 1];\n          if(value > TOPLEFT && value > TOP && value > TOPRIGHT && value > LEFT &&\n              value > RIGHT && value > BOTTOMLEFT && value > BOTTOM && value > BOTTOMRIGHT)\n          {\n            float x_acc = 0;\n            float y_acc = 0;\n            float score_acc = 0;\n            for (i = -3; i <= 3; ++i)\n            {\n              int ux = x + i;\n              if (ux >= 0 && ux < SRCW)\n              {\n                for (j = -3; j <= 3; ++j)\n                {\n                  int uy = y + j;\n                  if (uy >= 0 && uy < SRCH)\n                  {\n                  
  float score = src[uy * SRCW + ux];\n                    x_acc += ux * score;\n                    y_acc += uy * score;\n                    score_acc += score;\n                  }\n                }\n              }\n            }\n            x_acc /= score_acc;\n            y_acc /= score_acc;\n            score_acc = value;\n            dstptr[(num_people + 1) * 3 + 0] = x_acc;\n            dstptr[(num_people + 1) * 3 + 1] = y_acc;\n            dstptr[(num_people + 1) * 3 + 2] = score_acc;\n            ++num_people;\n          }\n        }\n      }\n    }\n    dstptr[0] = num_people;\n    src += SRC_PLANE_OFFSET;\n    dstptr += DST_PLANE_OFFSET;\n  }\n}\n\nMat PoseDetector::create_netsize_im\n    (\n    const Mat &im,\n    const int netw,\n    const int neth,\n    float *scale\n    )\n{\n  // for tall image\n  int newh = neth;\n  float s = newh / (float)im.rows;\n  int neww = im.cols * s;\n  if (neww > netw)\n  {\n    //for fat image\n    neww = netw;\n    s = neww / (float)im.cols;\n    newh = im.rows * s;\n  }\n\n  *scale = 1 / s;\n  Rect dst_area(0, 0, neww, newh);\n  Mat dst = Mat::zeros(neth, netw, CV_8UC3);\n  resize(im, dst(dst_area), Size(neww, newh));\n  return dst;\n}\n\nPoseDetector::PoseDetector(const char *cfg_path, const char *weight_path, int gpu_id) : Detector(cfg_path, weight_path,\ngpu_id) {\n  det_people = nullptr;\n  // initialize net\n  net_inw = get_net_width();\n  net_inh = get_net_height();\n  net_outw = get_net_out_width();\n  net_outh = get_net_out_height();\n}\n\nvoid PoseDetector::detect(cv::Mat im, float thresh) {\n  // 3. resize to net input size, put scaled image on the top left\n  float scale = 0.0f;\n  Mat netim = create_netsize_im(im, net_inw, net_inh, &scale);\n\n  // 4. normalized to float type\n  netim.convertTo(netim, CV_32F, 1 / 256.f, -0.5);\n\n  // 5. 
split channels\n  float *netin_data = new float[net_inw * net_inh * 3]();\n  float *netin_data_ptr = netin_data;\n  vector<Mat> input_channels;\n  for (int i = 0; i < 3; ++i)\n  {\n    Mat channel(net_inh, net_inw, CV_32FC1, netin_data_ptr);\n    input_channels.emplace_back(channel);\n    netin_data_ptr += (net_inw * net_inh);\n  }\n  split(netim, input_channels);\n\n  // 6. feed forward\n  double time_begin = getTickCount();\n  float *netoutdata = Detector::predict(netin_data);\n  double fee_time = (getTickCount() - time_begin) / getTickFrequency() * 1000;\n#ifdef DEBUG\n  cout << \"forward fee: \" << fee_time << \"ms\" << endl;\n#endif\n  // 7. resize net output back to input size to get heatmap\n  float *heatmap = new float[net_inw * net_inh * NET_OUT_CHANNELS];\n  for (int i = 0; i < NET_OUT_CHANNELS; ++i)\n  {\n    Mat netout(net_outh, net_outw, CV_32F, (netoutdata + net_outh*net_outw*i));\n    Mat nmsin(net_inh, net_inw, CV_32F, heatmap + net_inh*net_inw*i);\n    resize(netout, nmsin, Size(net_inw, net_inh), 0, 0, CV_INTER_CUBIC);\n  }\n\n  // 8. get heatmap peaks\n  float *heatmap_peaks = new float[3 * (POSE_MAX_PEOPLE+1) * (NET_OUT_CHANNELS-1)];\n  find_heatmap_peaks(heatmap, heatmap_peaks, net_inw, net_inh, NET_OUT_CHANNELS, 0.05);\n\n  // 9. 
link parts\n  vector<float> keypoints;\n  vector<int> shape;\n  connect_bodyparts(keypoints, heatmap, heatmap_peaks, net_inw, net_inh, 9, 0.05, 6, 0.4, shape);\n\n  delete [] heatmap_peaks;\n  delete [] heatmap;\n  delete [] netin_data;\n\n  // people\n  if (det_people != nullptr)\n    delete det_people;\n  det_people = new People(keypoints, shape, scale);\n}\n\nvoid PoseDetector::draw(cv::Mat mat)\n{\n  if (det_people != nullptr) // detect() may not have run yet\n    det_people->render_pose_keypoints(mat);\n}\n\nstd::string PoseDetector::det_to_json(int frame_id)\n{\n  std::string out_str;\n  char tmp_buf[1024];\n  snprintf(tmp_buf, sizeof(tmp_buf), \"{\\n \\\"frame_id\\\":%d, \\n \", frame_id);\n  out_str = tmp_buf;\n  out_str += det_people->get_output();\n  out_str += \"\\n}\";\n  return out_str;\n}\n\nPoseDetector::~PoseDetector() {\n  delete det_people; // allocated in detect(); delete on nullptr is a no-op\n}\n"
  },
  {
    "path": "server/src/pose_detector.hpp",
    "content": "#ifndef __POSE_DETECTOR\n#define __POSE_DETECTOR\n#include <vector>\n#include <string>\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\n#include \"yolo_v2_class.hpp\"\n#include \"DetectorInterface.hpp\"\n#include \"people.hpp\"\n\nclass PoseDetector : public Detector, public DetectorInterface\n{\n  private:\n    int net_inw;\n    int net_inh;\n    int net_outw;\n    int net_outh;\n    People* det_people;\n  public:\n    PoseDetector(const char *cfg_path, const char *weight_path, int gpu_id);\n    ~PoseDetector();\n    inline People* get_people(void) { return det_people; }\n    virtual void detect(cv::Mat mat, float thresh);\n    virtual void draw(cv::Mat mat);\n    virtual std::string det_to_json(int frame_id);\n  private:\n    void connect_bodyparts\n      (\n       std::vector<float>& pose_keypoints,\n       const float* const map,\n       const float* const peaks,\n       int mapw,\n       int maph,\n       const int inter_min_above_th,\n       const float inter_th,\n       const int min_subset_cnt,\n       const float min_subset_score,\n       std::vector<int>& keypoint_shape\n      );\n\n    void find_heatmap_peaks\n      (\n       const float *src,\n       float *dst,\n       const int SRCW,\n       const int SRCH,\n       const int SRC_CH,\n       const float TH\n      );\n\n    cv::Mat create_netsize_im\n      (\n       const cv::Mat &im,\n       const int netw,\n       const int neth,\n       float *scale\n      );\n};\n\n#endif\n"
  },
  {
    "path": "server/src/share_queue.h",
    "content": "#pragma once\n// https://stackoverflow.com/questions/36762248/why-is-stdqueue-not-thread-safe\n#include <deque>\n#include <mutex>\n#include <condition_variable>\n\ntemplate <typename T>\nclass SharedQueue\n{\npublic:\n\tSharedQueue();\n\t~SharedQueue();\n\n\tT& front();\n\tvoid pop_front();\n\n\tvoid push_back(const T& item);\n\tvoid push_back(T&& item);\n\n\tint size();\n\tbool empty();\n\nprivate:\n\tstd::deque<T> queue_;\n\tstd::mutex mutex_;\n\tstd::condition_variable cond_;\n};\n\ntemplate <typename T>\nSharedQueue<T>::SharedQueue() {}\n\ntemplate <typename T>\nSharedQueue<T>::~SharedQueue() {}\n\ntemplate <typename T>\nT& SharedQueue<T>::front()\n{\n\tstd::unique_lock<std::mutex> mlock(mutex_);\n\twhile (queue_.empty())\n\t{\n\t\tcond_.wait(mlock);\n\t}\n\treturn queue_.front();\n}\n\ntemplate <typename T>\nvoid SharedQueue<T>::pop_front()\n{\n\tstd::unique_lock<std::mutex> mlock(mutex_);\n\twhile (queue_.empty())\n\t{\n\t\tcond_.wait(mlock);\n\t}\n\tqueue_.pop_front();\n}\n\ntemplate <typename T>\nvoid SharedQueue<T>::push_back(const T& item)\n{\n\tstd::unique_lock<std::mutex> mlock(mutex_);\n\tqueue_.push_back(item);\n\tmlock.unlock();     // unlock before notificiation to minimize mutex con\n\tcond_.notify_one(); // notify one waiting thread\n\n}\n\ntemplate <typename T>\nvoid SharedQueue<T>::push_back(T&& item)\n{\n\tstd::unique_lock<std::mutex> mlock(mutex_);\n\tqueue_.push_back(std::move(item));\n\tmlock.unlock();     // unlock before notificiation to minimize mutex con\n\tcond_.notify_one(); // notify one waiting thread\n\n}\n\ntemplate <typename T>\nint SharedQueue<T>::size()\n{\n\tstd::unique_lock<std::mutex> mlock(mutex_);\n\tint size = queue_.size();\n\tmlock.unlock();\n\treturn size;\n}\n"
  },
  {
    "path": "server/src/sink.cpp",
    "content": "#include <zmq.h>\n#include <stdio.h>\n#include <iostream>\n#include <sstream>\n#include <fstream>\n#include <string>\n#include <assert.h>\n#include <map>\n#include <csignal>\n#include <boost/archive/text_oarchive.hpp>\n#include <tbb/concurrent_hash_map.h>\n#include <opencv2/opencv.hpp>\n#include \"share_queue.h\"\n#include \"people.hpp\"\n#include \"Tracker.hpp\"\n#include \"frame.hpp\"\n\n// ZMQ\nvoid *sock_pull;\nvoid *sock_pub;\nvoid *sock_rnn;\n\n// ShareQueue\ntbb::concurrent_hash_map<int, Frame> frame_map;\nSharedQueue<Frame> processed_frame_queue;\n\n// pool\nFrame_pool *frame_pool;\n\n// signal\nvolatile bool exit_flag = false;\nvoid sig_handler(int s)\n{\n  exit_flag = true;\n}\n\nvoid *recv_in_thread(void *ptr)\n{\n  int recv_json_len;\n  unsigned char json_buf[JSON_BUF_LEN];\n  Frame frame;\n\n  while(!exit_flag) {\n    recv_json_len = zmq_recv(sock_pull, json_buf, JSON_BUF_LEN, ZMQ_NOBLOCK);\n\n    if (recv_json_len > 0) {\n      frame = frame_pool->alloc_frame();\n      json_buf[recv_json_len] = '\\0';\n      json_to_frame(json_buf, frame);\n\n#ifdef DEBUG\n      std::cout << \"Sink | Recv From Worker | SEQ : \" << frame.seq_buf\n        << \" LEN : \" << frame.msg_len << std::endl;\n#endif\n\n      tbb::concurrent_hash_map<int, Frame>::accessor a;\n      while(1)\n      {\n        if(frame_map.insert(a, atoi((char *)frame.seq_buf))) {\n          a->second = frame;\n          break;\n        }\n      }\n    }\n  }\n}\n\nvoid *send_in_thread(void *ptr)\n{\n  int send_json_len;\n  unsigned char json_buf[JSON_BUF_LEN];\n  Frame frame;\n\n  while(!exit_flag) {\n    if (processed_frame_queue.size() > 0) {\n      frame = processed_frame_queue.front();\n      processed_frame_queue.pop_front();\n\n#ifdef DEBUG\n      std::cout << \"Sink | Pub To Client | SEQ : \" << frame.seq_buf\n        << \" LEN : \" << frame.msg_len << std::endl;\n#endif\n\n      send_json_len = frame_to_json(json_buf, frame);\n      zmq_send(sock_pub, json_buf, 
send_json_len, 0);\n\n      frame_pool->free_frame(frame);\n    }\n  }\n  return 0;\n}\n\nint main()\n{\n  // register the handler so Ctrl-C actually sets exit_flag\n  signal(SIGINT, sig_handler);\n\n  // ZMQ\n  int ret;\n  void *context = zmq_ctx_new();\n\n  sock_pull = zmq_socket(context, ZMQ_PULL);\n  ret = zmq_bind(sock_pull, \"ipc://processed\");\n  assert(ret != -1);\n\n  sock_pub = zmq_socket(context, ZMQ_PUB);\n  ret = zmq_bind(sock_pub, \"tcp://*:5570\");\n  assert(ret != -1);\n\n  sock_rnn = zmq_socket(context, ZMQ_REQ);\n  ret = zmq_connect(sock_rnn, \"ipc://action\");\n  assert(ret != -1);\n\n  // frame_pool\n  frame_pool = new Frame_pool(5000);\n\n  // Thread\n  pthread_t recv_thread;\n  if (pthread_create(&recv_thread, 0, recv_in_thread, 0))\n    std::cerr << \"Thread creation failed (recv_thread)\" << std::endl;\n\n  pthread_t send_thread;\n  if (pthread_create(&send_thread, 0, send_in_thread, 0))\n    std::cerr << \"Thread creation failed (send_thread)\" << std::endl;\n\n  pthread_detach(send_thread);\n  pthread_detach(recv_thread);\n\n  // serialize\n  std::stringstream ss;\n  std::stringbuf *pbuf = ss.rdbuf();\n\n  // frame\n  Frame frame;\n  int frame_len;\n  unsigned char *frame_buf_ptr;\n  char json_tmp_buf[1024];\n\n  // Tracker\n  Tracker tracker;\n  TrackingBox tb;\n  std::vector<TrackingBox> det_data;\n  std::vector<TrackingBox> track_data;\n  volatile int track_frame = 1;\n  int init_flag = 0;\n\n  // person\n  std::vector<Person*> person_data;\n  std::stringstream p_ss;\n  std::ofstream output_file(\"output.txt\");\n\n  // rnn\n  unsigned char rnn_buf[100];\n\n  // draw\n  std::vector<int> param = {cv::IMWRITE_JPEG_QUALITY, 60 };\n  const int CNUM = 20;\n  cv::RNG rng(0xFFFFFFFF);\n  cv::Scalar_<int> randColor[CNUM];\n  for (int i = 0; i < CNUM; i++)\n    rng.fill(randColor[i], cv::RNG::UNIFORM, 0, 256);\n\n  while(!exit_flag) {\n    if (!frame_map.empty()) {\n      tbb::concurrent_hash_map<int, Frame>::accessor c_a;\n\n      if (frame_map.find(c_a, (const int)track_frame))\n      {\n        frame = (Frame)c_a->second;\n        while(1) {\n    
      if (frame_map.erase(c_a))\n            break;\n        }\n\n        // unsigned char array -> vector\n        frame_len = frame.msg_len;\n        frame_buf_ptr = frame.msg_buf;\n        std::vector<unsigned char> raw_vec(frame_buf_ptr, frame_buf_ptr + frame_len);\n        \n        // vector -> mat\n        cv::Mat raw_mat = cv::imdecode(cv::Mat(raw_vec), 1);\n\n        // get people & unserialize\n        ss.str(\"\"); // ss clear\n        pbuf->sputn((const char*)frame.det_buf, frame.det_len);\n        People people;\n        {\n          boost::archive::text_iarchive ia(ss);\n          ia >> people;\n        }   \n\n        // detect people result to json\n        std::string det_json;\n        sprintf(json_tmp_buf, \"{\\n \\\"frame_id\\\":%d, \\n \", track_frame);\n        det_json = json_tmp_buf;\n        det_json += people.get_output();\n        det_json += \"\\n}\";\n\n        frame.det_len = det_json.size();\n        memcpy(frame.det_buf, det_json.c_str(), frame.det_len);\n        frame.det_buf[frame.det_len] = '\\0';\n\n        // draw people skeleton\n        people.render_pose_keypoints(raw_mat);\n\n        // people to person\n        person_data.clear();\n        person_data = people.to_person(); \n\n        // ready to track\n        det_data.clear();\n        for (auto it = person_data.begin(); it != person_data.end(); it++) {\n          tb.frame = track_frame;\n          tb.box = (*it)->get_rect();\n          tb.p = (*it);\n          det_data.push_back(tb);\n        }\n\n        // not detect people\n        if (det_data.size() < 1) {\n          processed_frame_queue.push_back(frame);\n          track_frame++;\n          continue;\n        }\n\n        // Track\n        track_data.clear();\n        if (init_flag == 0) {\n          track_data = tracker.init(det_data);\n          init_flag = 1;\n        }\n        else {\n          track_data = tracker.update(det_data);\n        }\n  \n        p_ss.str(\"\");\n        for (unsigned int i = 0; i 
< track_data.size(); i++) {\n          // get person \n          Person* track_person = track_data[i].p;\n          \n          // get history\n          if (i != 0)\n            p_ss << '\\n';\n          p_ss << track_person->get_history();\n\n          // get_output for train\n          if (track_person->has_output()) {\n            output_file << *(track_person);\n          }\n        }\n        \n        \n        // RNN action\n        double time_begin = cv::getTickCount(); // sink.cpp has no using namespace cv, so qualify\n\n        std::string hists = p_ss.str();\n        if (hists.size() > 0) {\n          zmq_send(sock_rnn, hists.c_str(), hists.size(), 0);\n          zmq_recv(sock_rnn, rnn_buf, 100, 0);\n\n          // action update\n          for (unsigned int i = 0; i < track_data.size(); i++) {\n            track_data[i].p->set_action(rnn_buf[i] - '0');\n          }\n        }\n\n        double fee_time = (cv::getTickCount() - time_begin) / cv::getTickFrequency() * 1000;\n#ifdef DEBUG\n        std::cout << \"RNN fee: \" << fee_time << \"ms | T : \" << track_frame << std::endl;\n#endif\n\n        // check crash\n        for (unsigned int i = 0; i < track_data.size(); i++) {\n          Person* me = track_data[i].p;\n          if (me->is_danger()) { // punch or kick\n            for (unsigned int j = 0; j < track_data.size(); j++) {\n              if (j == i)\n                continue;\n              Person* other = track_data[j].p;\n\n              // if me and other crash\n              if (me->check_crash(*other)) {\n                //std::cout << \"CRASH !!! 
\" << me->get_id() << \" : \" << other->get_id() << std::endl;\n                me->set_enemy(other);\n                other->set_enemy(me);\n              }\n            }\n          }\n\n          // draw fight bounding box\n          const Person *my_enemy = me->get_enemy();\n          if (my_enemy != nullptr) {\n            cv::Rect_<float> crash_rect = me->get_crash_rect(*my_enemy);\n\n            cv::putText(raw_mat, \"FIGHT\", cv::Point(crash_rect.x, crash_rect.y), cv::FONT_HERSHEY_DUPLEX, 0.8, cv::Scalar(0, 255, 0), 2);\n            cv::rectangle(raw_mat, crash_rect, cv::Scalar(0, 255, 0), 2, 8, 0);\n          }\n        }\n        \n        \n        // draw Track data\n        for (auto td: track_data) {\n          // draw track_id\n          cv::putText(raw_mat, to_string(td.id), cv::Point(td.box.x, td.box.y), cv::FONT_HERSHEY_DUPLEX, 0.8, randColor[td.id % CNUM], 2);\n          // draw person bounding box\n          cv::putText(raw_mat, td.p->get_action_str(), cv::Point(td.box.x, td.box.y + 30), cv::FONT_HERSHEY_DUPLEX, 0.8,\n          randColor[td.id % CNUM], 2);\n          cv::rectangle(raw_mat, td.box, randColor[td.id % CNUM], 2, 8, 0);\n        }\n\n        // draw_mat -> vector\n        std::vector<unsigned char> res_vec;\n        cv::imencode(\".jpg\", raw_mat, res_vec, param);\n\n        // vector -> frame array\n        frame.msg_len = res_vec.size();\n        std::copy(res_vec.begin(), res_vec.end(), frame.msg_buf);\n\n        processed_frame_queue.push_back(frame);\n        track_frame++;\n      }\n    }\n  }\n\n  delete frame_pool;\n\n  output_file.close();\n\n  zmq_close(sock_pull);\n  zmq_close(sock_pub);\n  zmq_close(sock_rnn);\n\n  zmq_ctx_destroy(context);\n  return 0;\n}\n"
  },
  {
    "path": "server/src/ventilator.cpp",
    "content": "#include <zmq.h>\n#include <iostream>\n#include <cassert>\n#include <pthread.h>\n#include <csignal>\n#include \"share_queue.h\"\n#include \"frame.hpp\"\n\n// ZMQ\nvoid *sock_pull;\nvoid *sock_push;\n\n// ShareQueue\nSharedQueue<Frame> frame_queue;\n\n// pool\nFrame_pool *frame_pool;\n\n// signal\nvolatile bool exit_flag = false;\nvoid sig_handler(int s)\n{\n  exit_flag = true;\n}\n\nvoid *recv_in_thread(void *ptr)\n{\n  int recv_json_len;\n  unsigned char json_buf[JSON_BUF_LEN];\n  Frame frame;\n\n  while(!exit_flag) {\n    recv_json_len = zmq_recv(sock_pull, json_buf, JSON_BUF_LEN, ZMQ_NOBLOCK);\n\n    if (recv_json_len > 0) {\n      frame = frame_pool->alloc_frame();\n      json_buf[recv_json_len] = '\\0';\n      json_to_frame(json_buf, frame);\n#ifdef DEBUG\n      std::cout << \"Ventilator | Recv From Client | SEQ : \" << frame.seq_buf \n        << \" LEN : \" << frame.msg_len << std::endl;\n#endif\n      frame_queue.push_back(frame);\n    }\n  }\n}\n\nvoid *send_in_thread(void *ptr)\n{\n  int send_json_len;\n  unsigned char json_buf[JSON_BUF_LEN];\n  Frame frame;\n  while(!exit_flag) {\n    if (frame_queue.size() > 0) {\n      frame = frame_queue.front();\n      frame_queue.pop_front();\n\n#ifdef DEBUG\n      std::cout << \"Ventilator | Send To Worker | SEQ : \" << frame.seq_buf \n        << \" LEN : \" << frame.msg_len << std::endl;\n#endif\n\n      send_json_len = frame_to_json(json_buf, frame);\n      zmq_send(sock_push, json_buf, send_json_len, 0);\n\n      frame_pool->free_frame(frame);\n    }\n  }\n}\nint main()\n{\n  // ZMQ\n  int ret;\n  void *context = zmq_ctx_new(); \n  sock_pull = zmq_socket(context, ZMQ_PULL);\n  ret = zmq_bind(sock_pull, \"tcp://*:5575\");\n  assert(ret != -1);\n\n  sock_push = zmq_socket(context, ZMQ_PUSH);\n  ret = zmq_bind(sock_push, \"ipc://unprocessed\");\n  assert(ret != -1);\n\n  // frame_pool\n  frame_pool = new Frame_pool();\n\n  // Thread\n  pthread_t recv_thread;\n  if (pthread_create(&recv_thread, 0, 
recv_in_thread, 0))\n    std::cerr << \"Thread creation failed (recv_thread)\" << std::endl;\n\n  pthread_t send_thread;\n  if (pthread_create(&send_thread, 0, send_in_thread, 0))\n    std::cerr << \"Thread creation failed (send_thread)\" << std::endl;\n\n  pthread_detach(send_thread);\n  pthread_detach(recv_thread);\n\n  while(!exit_flag);\n\n  delete frame_pool;\n  zmq_close(sock_pull);\n  zmq_close(sock_push);\n  zmq_ctx_destroy(context);\n}\n"
  },
  {
    "path": "server/src/worker.cpp",
    "content": "#include <zmq.h>\n#include <iostream>\n#include <sstream>\n#include <string>\n#include <vector>\n#include <memory>\n#include <chrono>\n#include <csignal>\n#include <assert.h>\n#include <pthread.h>\n#include <boost/archive/text_oarchive.hpp>\n#include \"share_queue.h\"\n#include \"frame.hpp\"\n#include \"args.hpp\"\n#include \"pose_detector.hpp\"\n// opencv\n#include <opencv2/opencv.hpp>\t\t\t// C++\n\n// ZMQ\nvoid *sock_pull;\nvoid *sock_push;\n\n// ShareQueue\nSharedQueue<Frame> unprocessed_frame_queue;\nSharedQueue<Frame> processed_frame_queue;;\n\n// pool\nFrame_pool *frame_pool;\n\n// signal\nvolatile bool exit_flag = false;\nvoid sig_handler(int s)\n{\n  exit_flag = true;\n}\n\nvoid *recv_in_thread(void *ptr)\n{\n  int recv_json_len;\n  unsigned char json_buf[JSON_BUF_LEN];\n  Frame frame;\n\n  while(!exit_flag) {\n    recv_json_len = zmq_recv(sock_pull, json_buf, JSON_BUF_LEN, ZMQ_NOBLOCK);\n    \n    if (recv_json_len > 0) {\n      frame = frame_pool->alloc_frame();\n      json_buf[recv_json_len] = '\\0';\n      json_to_frame(json_buf, frame);\n\n#ifdef DEBUG\n      std::cout << \"Worker | Recv From Ventilator | SEQ : \" << frame.seq_buf \n        << \" LEN : \" << frame.msg_len << std::endl;\n#endif\n      unprocessed_frame_queue.push_back(frame);\n    }\n  }\n}\n\nvoid *send_in_thread(void *ptr)\n{\n  int send_json_len;\n  unsigned char json_buf[JSON_BUF_LEN];\n  Frame frame;\n\n  while(!exit_flag) {\n    if (processed_frame_queue.size() > 0) {\n      frame = processed_frame_queue.front();\n      processed_frame_queue.pop_front();\n\n#ifdef DEBUG\n      std::cout << \"Worker | Send To Sink | SEQ : \" << frame.seq_buf\n        << \" LEN : \" << frame.msg_len << std::endl;\n#endif\n      send_json_len = frame_to_json(json_buf, frame);\n      zmq_send(sock_push, json_buf, send_json_len, 0);\n\n      frame_pool->free_frame(frame);\n    }\n  }\n}\n\nint main(int argc, char *argv[])\n{\n\tif (argc < 2) {\n    fprintf(stderr, \"usage: %s <cfg> 
<weights> [-gpu GPU_ID] [-thresh THRESH]\\n\", argv[0]);\n    return 1;\n  }\n\n  const char *cfg_path = argv[1];\n  const char *weights_path = argv[2];\n  int gpu_id = find_int_arg(argc, argv, \"-gpu\", 0);\n  float thresh = find_float_arg(argc, argv, \"-thresh\", 0.2);\n  fprintf(stdout, \"cfg : %s, weights : %s, gpu-id : %d, thresh : %f\\n\", \n      cfg_path, weights_path, gpu_id, thresh);\n\n  // install signal handler so Ctrl-C sets exit_flag and main can clean up\n  std::signal(SIGINT, sig_handler);\n\n  // opencv\n  std::vector<int> param = {cv::IMWRITE_JPEG_QUALITY, 60 };\n\n  // ZMQ\n  int ret;\n\n  void *context = zmq_ctx_new();\n\n  sock_pull = zmq_socket(context, ZMQ_PULL);\n  ret = zmq_connect(sock_pull, \"ipc://unprocessed\");\n  assert(ret != -1);\n\n  sock_push = zmq_socket(context, ZMQ_PUSH);\n  ret = zmq_connect(sock_push, \"ipc://processed\");\n  assert(ret != -1);\n\n  // frame_pool\n  frame_pool = new Frame_pool(5000);\n\n  // Thread\n  pthread_t recv_thread;\n  if (pthread_create(&recv_thread, 0, recv_in_thread, 0))\n    std::cerr << \"Thread creation failed (recv_thread)\" << std::endl;\n\n  pthread_t send_thread;\n  if (pthread_create(&send_thread, 0, send_in_thread, 0))\n    std::cerr << \"Thread creation failed (send_thread)\" << std::endl;\n\n  pthread_detach(send_thread);\n  pthread_detach(recv_thread);\n\n  // darknet openpose detector\n  PoseDetector detector(cfg_path, weights_path, gpu_id);\n  People* det_people = nullptr;\n  std::stringstream ss;\n\n  // frame\n  Frame frame;\n  int frame_len;\n  unsigned char *frame_buf_ptr;\n\n  // time\n  auto time_begin = std::chrono::steady_clock::now();\n  auto time_end = std::chrono::steady_clock::now();\n  double det_time;\n\n  while(!exit_flag) {\n    // recv from ventilator\n    if (unprocessed_frame_queue.size() > 0) {\n      frame = unprocessed_frame_queue.front();\n      unprocessed_frame_queue.pop_front();\n\n      frame_len = frame.msg_len;\n      frame_buf_ptr = frame.msg_buf;\n\n      // unsigned char array -> vector\n      std::vector<unsigned char> raw_vec(frame_buf_ptr, frame_buf_ptr + 
frame_len);\n\n      // vector -> mat\n      cv::Mat raw_mat = cv::imdecode(cv::Mat(raw_vec), 1);\n      \n      // detect pose\n      time_begin = std::chrono::steady_clock::now();\n      detector.detect(raw_mat, thresh);\n      det_people = detector.get_people();\n      time_end = std::chrono::steady_clock::now();\n      det_time = std::chrono::duration <double, std::milli> (time_end - time_begin).count();\n#ifdef DEBUG\n      std::cout << \"Darknet | Detect | SEQ : \" << frame.seq_buf << \" Time : \" << det_time << \"ms\" << std::endl;\n#endif \n      // det_people & serialize\n      ss.str(\"\"); // ss clear\n      {\n        boost::archive::text_oarchive oa(ss);\n        oa << *det_people;\n      }\n      std::string ser_people = ss.str();\n\n      frame.det_len = ser_people.size();\n      memcpy(frame.det_buf, ser_people.c_str(), frame.det_len);\n      frame.det_buf[frame.det_len] = '\\0';\n\n      // mat -> vector\n      std::vector<unsigned char> res_vec;\n      cv::imencode(\".jpg\", raw_mat, res_vec, param);\n\n      // vector -> frame array\n      frame.msg_len = res_vec.size();\n      std::copy(res_vec.begin(), res_vec.end(), frame.msg_buf);\n\n      // push to processed frame_queue\n      processed_frame_queue.push_back(frame);\n    }\n  }\n\n\tdelete frame_pool;\n  zmq_close(sock_pull);\n  zmq_close(sock_push);\n  zmq_ctx_destroy(context);\n\n  return 0;\n}\n"
  },
  {
    "path": "server/src/yolo_v2_class.hpp",
    "content": "#ifndef YOLO_V2_CLASS_HPP\n#define YOLO_V2_CLASS_HPP\n\n#ifndef LIB_API\n#ifdef LIB_EXPORTS\n#if defined(_MSC_VER)\n#define LIB_API __declspec(dllexport)\n#else\n#define LIB_API __attribute__((visibility(\"default\")))\n#endif\n#else\n#if defined(_MSC_VER)\n#define LIB_API\n#else\n#define LIB_API\n#endif\n#endif\n#endif\n\n#define C_SHARP_MAX_OBJECTS 1000\n\nstruct bbox_t {\n    unsigned int x, y, w, h;       // (x,y) - top-left corner, (w, h) - width & height of bounded box\n    float prob;                    // confidence - probability that the object was found correctly\n    unsigned int obj_id;           // class of object - from range [0, classes-1]\n    unsigned int track_id;         // tracking id for video (0 - untracked, 1 - inf - tracked object)\n    unsigned int frames_counter;   // counter of frames on which the object was detected\n    float x_3d, y_3d, z_3d;        // center of object (in Meters) if ZED 3D Camera is used\n};\n\nstruct image_t {\n    int h;                        // height\n    int w;                        // width\n    int c;                        // number of chanels (3 - for RGB)\n    float *data;                  // pointer to the image data\n};\n\nstruct bbox_t_container {\n    bbox_t candidates[C_SHARP_MAX_OBJECTS];\n};\n\n#ifdef __cplusplus\n#include <memory>\n#include <vector>\n#include <deque>\n#include <algorithm>\n#include <chrono>\n#include <string>\n#include <sstream>\n#include <iostream>\n#include <cmath>\n\n#ifdef OPENCV\n#include <opencv2/opencv.hpp>            // C++\n#include <opencv2/highgui/highgui_c.h>   // C\n#include <opencv2/imgproc/imgproc_c.h>   // C\n#endif\n\nextern \"C\" LIB_API int init(const char *configurationFilename, const char *weightsFilename, int gpu);\nextern \"C\" LIB_API int detect_image(const char *filename, bbox_t_container &container);\nextern \"C\" LIB_API int detect_mat(const uint8_t* data, const size_t data_length, bbox_t_container &container);\nextern \"C\" LIB_API int 
dispose();\nextern \"C\" LIB_API int get_device_count();\nextern \"C\" LIB_API int get_device_name(int gpu, char* deviceName);\nextern \"C\" LIB_API bool built_with_cuda();\nextern \"C\" LIB_API bool built_with_cudnn();\nextern \"C\" LIB_API bool built_with_opencv();\nextern \"C\" LIB_API void send_json_custom(char const* send_buf, int port, int timeout);\n\nclass Detector {\n    std::shared_ptr<void> detector_gpu_ptr;\n    std::deque<std::vector<bbox_t>> prev_bbox_vec_deque;\npublic:\n    const int cur_gpu_id;\n    float nms = .4;\n    bool wait_stream;\n\n    LIB_API Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);\n    LIB_API ~Detector();\n\n    LIB_API std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);\n    LIB_API std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);\n    static LIB_API image_t load_image(std::string image_filename);\n    static LIB_API void free_image(image_t m);\n    LIB_API int get_net_width() const;\n    LIB_API int get_net_height() const;\n    LIB_API int get_net_out_width() const;\n    LIB_API int get_net_out_height() const;\n    LIB_API int get_net_color_depth() const;\n    LIB_API float *predict(float *input) const;\n\n    LIB_API std::vector<bbox_t> tracking_id(std::vector<bbox_t> cur_bbox_vec, bool const change_history = true,\n                                                int const frames_story = 5, int const max_dist = 40);\n\n    LIB_API void *get_cuda_context();\n\n    //LIB_API bool send_json_http(std::vector<bbox_t> cur_bbox_vec, std::vector<std::string> obj_names, int frame_id,\n    //    std::string filename = std::string(), int timeout = 400000, int port = 8070);\n\n    std::vector<bbox_t> detect_resized(image_t img, int init_w, int init_h, float thresh = 0.2, bool use_mean = false)\n    {\n        if (img.data == NULL)\n            throw std::runtime_error(\"Image is empty\");\n        auto detection_boxes = 
detect(img, thresh, use_mean);\n        float wk = (float)init_w / img.w, hk = (float)init_h / img.h;\n        for (auto &i : detection_boxes) i.x *= wk, i.w *= wk, i.y *= hk, i.h *= hk;\n        return detection_boxes;\n    }\n\n#ifdef OPENCV\n    std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false)\n    {\n        if(mat.data == NULL)\n            throw std::runtime_error(\"Image is empty\");\n        auto image_ptr = mat_to_image_resize(mat);\n        return detect_resized(*image_ptr, mat.cols, mat.rows, thresh, use_mean);\n    }\n\n    std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const\n    {\n        if (mat.data == NULL) return std::shared_ptr<image_t>(NULL);\n\n        cv::Size network_size = cv::Size(get_net_width(), get_net_height());\n        cv::Mat det_mat;\n        if (mat.size() != network_size)\n            cv::resize(mat, det_mat, network_size);\n        else\n            det_mat = mat;  // only reference is copied\n\n        return mat_to_image(det_mat);\n    }\n\n    static std::shared_ptr<image_t> mat_to_image(cv::Mat img_src)\n    {\n        cv::Mat img;\n        if (img_src.channels() == 4) cv::cvtColor(img_src, img, cv::COLOR_RGBA2BGR);\n        else if (img_src.channels() == 3) cv::cvtColor(img_src, img, cv::COLOR_RGB2BGR);\n        else if (img_src.channels() == 1) cv::cvtColor(img_src, img, cv::COLOR_GRAY2BGR);\n        else std::cerr << \" Warning: img_src.channels() is not 1, 3 or 4. 
It is = \" << img_src.channels() << std::endl;\n        std::shared_ptr<image_t> image_ptr(new image_t, [](image_t *img) { free_image(*img); delete img; });\n        *image_ptr = mat_to_image_custom(img);\n        return image_ptr;\n    }\n\nprivate:\n\n    static image_t mat_to_image_custom(cv::Mat mat)\n    {\n        int w = mat.cols;\n        int h = mat.rows;\n        int c = mat.channels();\n        image_t im = make_image_custom(w, h, c);\n        unsigned char *data = (unsigned char *)mat.data;\n        int step = mat.step;\n        for (int y = 0; y < h; ++y) {\n            for (int k = 0; k < c; ++k) {\n                for (int x = 0; x < w; ++x) {\n                    im.data[k*w*h + y*w + x] = data[y*step + x*c + k] / 255.0f;\n                }\n            }\n        }\n        return im;\n    }\n\n    static image_t make_empty_image(int w, int h, int c)\n    {\n        image_t out;\n        out.data = 0;\n        out.h = h;\n        out.w = w;\n        out.c = c;\n        return out;\n    }\n\n    static image_t make_image_custom(int w, int h, int c)\n    {\n        image_t out = make_empty_image(w, h, c);\n        out.data = (float *)calloc(h*w*c, sizeof(float));\n        return out;\n    }\n\n#endif    // OPENCV\n\npublic:\n\n    bool send_json_http(std::vector<bbox_t> cur_bbox_vec, std::vector<std::string> obj_names, int frame_id,\n        std::string filename = std::string(), int timeout = 400000, int port = 8070)\n    {\n        std::string send_str;\n\n        char *tmp_buf = (char *)calloc(1024, sizeof(char));\n        if (!filename.empty()) {\n            sprintf(tmp_buf, \"{\\n \\\"frame_id\\\":%d, \\n \\\"filename\\\":\\\"%s\\\", \\n \\\"objects\\\": [ \\n\", frame_id, filename.c_str());\n        }\n        else {\n            sprintf(tmp_buf, \"{\\n \\\"frame_id\\\":%d, \\n \\\"objects\\\": [ \\n\", frame_id);\n        }\n        send_str = tmp_buf;\n        free(tmp_buf);\n\n        for (auto & i : cur_bbox_vec) {\n            char *buf = 
(char *)calloc(2048, sizeof(char));\n\n            sprintf(buf, \"  {\\\"class_id\\\":%d, \\\"name\\\":\\\"%s\\\", \\\"absolute_coordinates\\\":{\\\"center_x\\\":%d, \\\"center_y\\\":%d, \\\"width\\\":%d, \\\"height\\\":%d}, \\\"confidence\\\":%f\",\n                i.obj_id, obj_names[i.obj_id].c_str(), i.x, i.y, i.w, i.h, i.prob);\n\n            //sprintf(buf, \"  {\\\"class_id\\\":%d, \\\"name\\\":\\\"%s\\\", \\\"relative_coordinates\\\":{\\\"center_x\\\":%f, \\\"center_y\\\":%f, \\\"width\\\":%f, \\\"height\\\":%f}, \\\"confidence\\\":%f\",\n            //    i.obj_id, obj_names[i.obj_id], i.x, i.y, i.w, i.h, i.prob);\n\n            send_str += buf;\n\n            if (!std::isnan(i.z_3d)) {\n                sprintf(buf, \"\\n    , \\\"coordinates_in_meters\\\":{\\\"x_3d\\\":%.2f, \\\"y_3d\\\":%.2f, \\\"z_3d\\\":%.2f}\",\n                    i.x_3d, i.y_3d, i.z_3d);\n                send_str += buf;\n            }\n\n            send_str += \"}\\n\";\n\n            free(buf);\n        }\n\n        //send_str +=  \"\\n ] \\n}, \\n\";\n        send_str += \"\\n ] \\n}\";\n\n        send_json_custom(send_str.c_str(), port, timeout);\n        return true;\n    }\n};\n// --------------------------------------------------------------------------------\n\n\n#if defined(TRACK_OPTFLOW) && defined(OPENCV) && defined(GPU)\n\n#include <opencv2/cudaoptflow.hpp>\n#include <opencv2/cudaimgproc.hpp>\n#include <opencv2/cudaarithm.hpp>\n#include <opencv2/core/cuda.hpp>\n\nclass Tracker_optflow {\npublic:\n    const int gpu_count;\n    const int gpu_id;\n    const int flow_error;\n\n\n    Tracker_optflow(int _gpu_id = 0, int win_size = 15, int max_level = 3, int iterations = 8000, int _flow_error = -1) :\n        gpu_count(cv::cuda::getCudaEnabledDeviceCount()), gpu_id(std::min(_gpu_id, gpu_count-1)),\n        flow_error((_flow_error > 0)? 
_flow_error:(win_size*4))\n    {\n        int const old_gpu_id = cv::cuda::getDevice();\n        cv::cuda::setDevice(gpu_id);\n\n        stream = cv::cuda::Stream();\n\n        sync_PyrLKOpticalFlow_gpu = cv::cuda::SparsePyrLKOpticalFlow::create();\n        sync_PyrLKOpticalFlow_gpu->setWinSize(cv::Size(win_size, win_size));    // 9, 15, 21, 31\n        sync_PyrLKOpticalFlow_gpu->setMaxLevel(max_level);        // +- 3 pt\n        sync_PyrLKOpticalFlow_gpu->setNumIters(iterations);    // 2000, def: 30\n\n        cv::cuda::setDevice(old_gpu_id);\n    }\n\n    // just to avoid extra allocations\n    cv::cuda::GpuMat src_mat_gpu;\n    cv::cuda::GpuMat dst_mat_gpu, dst_grey_gpu;\n    cv::cuda::GpuMat prev_pts_flow_gpu, cur_pts_flow_gpu;\n    cv::cuda::GpuMat status_gpu, err_gpu;\n\n    cv::cuda::GpuMat src_grey_gpu;    // used in both functions\n    cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> sync_PyrLKOpticalFlow_gpu;\n    cv::cuda::Stream stream;\n\n    std::vector<bbox_t> cur_bbox_vec;\n    std::vector<bool> good_bbox_vec_flags;\n    cv::Mat prev_pts_flow_cpu;\n\n    void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)\n    {\n        cur_bbox_vec = _cur_bbox_vec;\n        good_bbox_vec_flags = std::vector<bool>(cur_bbox_vec.size(), true);\n        cv::Mat prev_pts, cur_pts_flow_cpu;\n\n        for (auto &i : cur_bbox_vec) {\n            float x_center = (i.x + i.w / 2.0F);\n            float y_center = (i.y + i.h / 2.0F);\n            prev_pts.push_back(cv::Point2f(x_center, y_center));\n        }\n\n        if (prev_pts.rows == 0)\n            prev_pts_flow_cpu = cv::Mat();\n        else\n            cv::transpose(prev_pts, prev_pts_flow_cpu);\n\n        if (prev_pts_flow_gpu.cols < prev_pts_flow_cpu.cols) {\n            prev_pts_flow_gpu = cv::cuda::GpuMat(prev_pts_flow_cpu.size(), prev_pts_flow_cpu.type());\n            cur_pts_flow_gpu = cv::cuda::GpuMat(prev_pts_flow_cpu.size(), prev_pts_flow_cpu.type());\n\n            status_gpu = 
cv::cuda::GpuMat(prev_pts_flow_cpu.size(), CV_8UC1);\n            err_gpu = cv::cuda::GpuMat(prev_pts_flow_cpu.size(), CV_32FC1);\n        }\n\n        prev_pts_flow_gpu.upload(cv::Mat(prev_pts_flow_cpu), stream);\n    }\n\n\n    void update_tracking_flow(cv::Mat src_mat, std::vector<bbox_t> _cur_bbox_vec)\n    {\n        int const old_gpu_id = cv::cuda::getDevice();\n        if (old_gpu_id != gpu_id)\n            cv::cuda::setDevice(gpu_id);\n\n        if (src_mat.channels() == 1 || src_mat.channels() == 3 || src_mat.channels() == 4) {\n            if (src_mat_gpu.cols == 0) {\n                src_mat_gpu = cv::cuda::GpuMat(src_mat.size(), src_mat.type());\n                src_grey_gpu = cv::cuda::GpuMat(src_mat.size(), CV_8UC1);\n            }\n\n            if (src_mat.channels() == 1) {\n                src_mat_gpu.upload(src_mat, stream);\n                src_mat_gpu.copyTo(src_grey_gpu);\n            }\n            else if (src_mat.channels() == 3) {\n                src_mat_gpu.upload(src_mat, stream);\n                cv::cuda::cvtColor(src_mat_gpu, src_grey_gpu, CV_BGR2GRAY, 1, stream);\n            }\n            else if (src_mat.channels() == 4) {\n                src_mat_gpu.upload(src_mat, stream);\n                cv::cuda::cvtColor(src_mat_gpu, src_grey_gpu, CV_BGRA2GRAY, 1, stream);\n            }\n            else {\n                std::cerr << \" Warning: src_mat.channels() is not: 1, 3 or 4. 
It is = \" << src_mat.channels() << \" \\n\";\n                return;\n            }\n\n        }\n        update_cur_bbox_vec(_cur_bbox_vec);\n\n        if (old_gpu_id != gpu_id)\n            cv::cuda::setDevice(old_gpu_id);\n    }\n\n\n    std::vector<bbox_t> tracking_flow(cv::Mat dst_mat, bool check_error = true)\n    {\n        if (sync_PyrLKOpticalFlow_gpu.empty()) {\n            std::cout << \"sync_PyrLKOpticalFlow_gpu isn't initialized \\n\";\n            return cur_bbox_vec;\n        }\n\n        int const old_gpu_id = cv::cuda::getDevice();\n        if(old_gpu_id != gpu_id)\n            cv::cuda::setDevice(gpu_id);\n\n        if (dst_mat_gpu.cols == 0) {\n            dst_mat_gpu = cv::cuda::GpuMat(dst_mat.size(), dst_mat.type());\n            dst_grey_gpu = cv::cuda::GpuMat(dst_mat.size(), CV_8UC1);\n        }\n\n        //dst_grey_gpu.upload(dst_mat, stream);    // use BGR\n        dst_mat_gpu.upload(dst_mat, stream);\n        cv::cuda::cvtColor(dst_mat_gpu, dst_grey_gpu, CV_BGR2GRAY, 1, stream);\n\n        if (src_grey_gpu.rows != dst_grey_gpu.rows || src_grey_gpu.cols != dst_grey_gpu.cols) {\n            stream.waitForCompletion();\n            src_grey_gpu = dst_grey_gpu.clone();\n            cv::cuda::setDevice(old_gpu_id);\n            return cur_bbox_vec;\n        }\n\n        ////sync_PyrLKOpticalFlow_gpu.sparse(src_grey_gpu, dst_grey_gpu, prev_pts_flow_gpu, cur_pts_flow_gpu, status_gpu, &err_gpu);    // OpenCV 2.4.x\n        sync_PyrLKOpticalFlow_gpu->calc(src_grey_gpu, dst_grey_gpu, prev_pts_flow_gpu, cur_pts_flow_gpu, status_gpu, err_gpu, stream);    // OpenCV 3.x\n\n        cv::Mat cur_pts_flow_cpu;\n        cur_pts_flow_gpu.download(cur_pts_flow_cpu, stream);\n\n        dst_grey_gpu.copyTo(src_grey_gpu, stream);\n\n        cv::Mat err_cpu, status_cpu;\n        err_gpu.download(err_cpu, stream);\n        status_gpu.download(status_cpu, stream);\n\n        stream.waitForCompletion();\n\n        std::vector<bbox_t> result_bbox_vec;\n\n        if 
(err_cpu.cols == cur_bbox_vec.size() && status_cpu.cols == cur_bbox_vec.size())\n        {\n            for (size_t i = 0; i < cur_bbox_vec.size(); ++i)\n            {\n                cv::Point2f cur_key_pt = cur_pts_flow_cpu.at<cv::Point2f>(0, i);\n                cv::Point2f prev_key_pt = prev_pts_flow_cpu.at<cv::Point2f>(0, i);\n\n                float moved_x = cur_key_pt.x - prev_key_pt.x;\n                float moved_y = cur_key_pt.y - prev_key_pt.y;\n\n                if (abs(moved_x) < 100 && abs(moved_y) < 100 && good_bbox_vec_flags[i])\n                    if (err_cpu.at<float>(0, i) < flow_error && status_cpu.at<unsigned char>(0, i) != 0 &&\n                        ((float)cur_bbox_vec[i].x + moved_x) > 0 && ((float)cur_bbox_vec[i].y + moved_y) > 0)\n                    {\n                        cur_bbox_vec[i].x += moved_x + 0.5;\n                        cur_bbox_vec[i].y += moved_y + 0.5;\n                        result_bbox_vec.push_back(cur_bbox_vec[i]);\n                    }\n                    else good_bbox_vec_flags[i] = false;\n                else good_bbox_vec_flags[i] = false;\n\n                //if(!check_error && !good_bbox_vec_flags[i]) result_bbox_vec.push_back(cur_bbox_vec[i]);\n            }\n        }\n\n        cur_pts_flow_gpu.swap(prev_pts_flow_gpu);\n        cur_pts_flow_cpu.copyTo(prev_pts_flow_cpu);\n\n        if (old_gpu_id != gpu_id)\n            cv::cuda::setDevice(old_gpu_id);\n\n        return result_bbox_vec;\n    }\n\n};\n\n#elif defined(TRACK_OPTFLOW) && defined(OPENCV)\n\n//#include <opencv2/optflow.hpp>\n#include <opencv2/video/tracking.hpp>\n\nclass Tracker_optflow {\npublic:\n    const int flow_error;\n\n\n    Tracker_optflow(int win_size = 15, int max_level = 3, int iterations = 8000, int _flow_error = -1) :\n        flow_error((_flow_error > 0)? 
_flow_error:(win_size*4))\n    {\n        sync_PyrLKOpticalFlow = cv::SparsePyrLKOpticalFlow::create();\n        sync_PyrLKOpticalFlow->setWinSize(cv::Size(win_size, win_size));    // 9, 15, 21, 31\n        sync_PyrLKOpticalFlow->setMaxLevel(max_level);        // +- 3 pt\n\n    }\n\n    // just to avoid extra allocations\n    cv::Mat dst_grey;\n    cv::Mat prev_pts_flow, cur_pts_flow;\n    cv::Mat status, err;\n\n    cv::Mat src_grey;    // used in both functions\n    cv::Ptr<cv::SparsePyrLKOpticalFlow> sync_PyrLKOpticalFlow;\n\n    std::vector<bbox_t> cur_bbox_vec;\n    std::vector<bool> good_bbox_vec_flags;\n\n    void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)\n    {\n        cur_bbox_vec = _cur_bbox_vec;\n        good_bbox_vec_flags = std::vector<bool>(cur_bbox_vec.size(), true);\n        cv::Mat prev_pts, cur_pts_flow;\n\n        for (auto &i : cur_bbox_vec) {\n            float x_center = (i.x + i.w / 2.0F);\n            float y_center = (i.y + i.h / 2.0F);\n            prev_pts.push_back(cv::Point2f(x_center, y_center));\n        }\n\n        if (prev_pts.rows == 0)\n            prev_pts_flow = cv::Mat();\n        else\n            cv::transpose(prev_pts, prev_pts_flow);\n    }\n\n\n    void update_tracking_flow(cv::Mat new_src_mat, std::vector<bbox_t> _cur_bbox_vec)\n    {\n        if (new_src_mat.channels() == 1) {\n            src_grey = new_src_mat.clone();\n        }\n        else if (new_src_mat.channels() == 3) {\n            cv::cvtColor(new_src_mat, src_grey, CV_BGR2GRAY, 1);\n        }\n        else if (new_src_mat.channels() == 4) {\n            cv::cvtColor(new_src_mat, src_grey, CV_BGRA2GRAY, 1);\n        }\n        else {\n            std::cerr << \" Warning: new_src_mat.channels() is not: 1, 3 or 4. 
It is = \" << new_src_mat.channels() << \" \\n\";\n            return;\n        }\n        update_cur_bbox_vec(_cur_bbox_vec);\n    }\n\n\n    std::vector<bbox_t> tracking_flow(cv::Mat new_dst_mat, bool check_error = true)\n    {\n        if (sync_PyrLKOpticalFlow.empty()) {\n            std::cout << \"sync_PyrLKOpticalFlow isn't initialized \\n\";\n            return cur_bbox_vec;\n        }\n\n        cv::cvtColor(new_dst_mat, dst_grey, CV_BGR2GRAY, 1);\n\n        if (src_grey.rows != dst_grey.rows || src_grey.cols != dst_grey.cols) {\n            src_grey = dst_grey.clone();\n            //std::cerr << \" Warning: src_grey.rows != dst_grey.rows || src_grey.cols != dst_grey.cols \\n\";\n            return cur_bbox_vec;\n        }\n\n        if (prev_pts_flow.cols < 1) {\n            return cur_bbox_vec;\n        }\n\n        ////sync_PyrLKOpticalFlow_gpu.sparse(src_grey_gpu, dst_grey_gpu, prev_pts_flow_gpu, cur_pts_flow_gpu, status_gpu, &err_gpu);    // OpenCV 2.4.x\n        sync_PyrLKOpticalFlow->calc(src_grey, dst_grey, prev_pts_flow, cur_pts_flow, status, err);    // OpenCV 3.x\n\n        dst_grey.copyTo(src_grey);\n\n        std::vector<bbox_t> result_bbox_vec;\n\n        if (err.rows == cur_bbox_vec.size() && status.rows == cur_bbox_vec.size())\n        {\n            for (size_t i = 0; i < cur_bbox_vec.size(); ++i)\n            {\n                cv::Point2f cur_key_pt = cur_pts_flow.at<cv::Point2f>(0, i);\n                cv::Point2f prev_key_pt = prev_pts_flow.at<cv::Point2f>(0, i);\n\n                float moved_x = cur_key_pt.x - prev_key_pt.x;\n                float moved_y = cur_key_pt.y - prev_key_pt.y;\n\n                if (abs(moved_x) < 100 && abs(moved_y) < 100 && good_bbox_vec_flags[i])\n                    if (err.at<float>(0, i) < flow_error && status.at<unsigned char>(0, i) != 0 &&\n                        ((float)cur_bbox_vec[i].x + moved_x) > 0 && ((float)cur_bbox_vec[i].y + moved_y) > 0)\n                    {\n                        
cur_bbox_vec[i].x += moved_x + 0.5;\n                        cur_bbox_vec[i].y += moved_y + 0.5;\n                        result_bbox_vec.push_back(cur_bbox_vec[i]);\n                    }\n                    else good_bbox_vec_flags[i] = false;\n                else good_bbox_vec_flags[i] = false;\n\n                //if(!check_error && !good_bbox_vec_flags[i]) result_bbox_vec.push_back(cur_bbox_vec[i]);\n            }\n        }\n\n        prev_pts_flow = cur_pts_flow.clone();\n\n        return result_bbox_vec;\n    }\n\n};\n#else\n\nclass Tracker_optflow {};\n\n#endif    // defined(TRACK_OPTFLOW) && defined(OPENCV)\n\n\n#ifdef OPENCV\n\nstatic cv::Scalar obj_id_to_color(int obj_id) {\n    int const colors[6][3] = { { 1,0,1 },{ 0,0,1 },{ 0,1,1 },{ 0,1,0 },{ 1,1,0 },{ 1,0,0 } };\n    int const offset = obj_id * 123457 % 6;\n    int const color_scale = 150 + (obj_id * 123457) % 100;\n    cv::Scalar color(colors[offset][0], colors[offset][1], colors[offset][2]);\n    color *= color_scale;\n    return color;\n}\n\nclass preview_boxes_t {\n    enum { frames_history = 30 };    // how long to keep the history saved\n\n    struct preview_box_track_t {\n        unsigned int track_id, obj_id, last_showed_frames_ago;\n        bool current_detection;\n        bbox_t bbox;\n        cv::Mat mat_obj, mat_resized_obj;\n        preview_box_track_t() : track_id(0), obj_id(0), last_showed_frames_ago(frames_history), current_detection(false) {}\n    };\n    std::vector<preview_box_track_t> preview_box_track_id;\n    size_t const preview_box_size, bottom_offset;\n    bool const one_off_detections;\npublic:\n    preview_boxes_t(size_t _preview_box_size = 100, size_t _bottom_offset = 100, bool _one_off_detections = false) :\n        preview_box_size(_preview_box_size), bottom_offset(_bottom_offset), one_off_detections(_one_off_detections)\n    {}\n\n    void set(cv::Mat src_mat, std::vector<bbox_t> result_vec)\n    {\n        size_t const count_preview_boxes = src_mat.cols / 
preview_box_size;\n        if (preview_box_track_id.size() != count_preview_boxes) preview_box_track_id.resize(count_preview_boxes);\n\n        // increment frames history\n        for (auto &i : preview_box_track_id)\n            i.last_showed_frames_ago = std::min((unsigned)frames_history, i.last_showed_frames_ago + 1);\n\n        // occupy empty boxes\n        for (auto &k : result_vec) {\n            bool found = false;\n            // find the same (track_id)\n            for (auto &i : preview_box_track_id) {\n                if (i.track_id == k.track_id) {\n                    if (!one_off_detections) i.last_showed_frames_ago = 0; // for tracked objects\n                    found = true;\n                    break;\n                }\n            }\n            if (!found) {\n                // find empty box\n                for (auto &i : preview_box_track_id) {\n                    if (i.last_showed_frames_ago == frames_history) {\n                        if (!one_off_detections && k.frames_counter == 0) break; // don't show if obj isn't tracked yet\n                        i.track_id = k.track_id;\n                        i.obj_id = k.obj_id;\n                        i.bbox = k;\n                        i.last_showed_frames_ago = 0;\n                        break;\n                    }\n                }\n            }\n        }\n\n        // draw preview box (from old or current frame)\n        for (size_t i = 0; i < preview_box_track_id.size(); ++i)\n        {\n            // get object image\n            cv::Mat dst = preview_box_track_id[i].mat_resized_obj;\n            preview_box_track_id[i].current_detection = false;\n\n            for (auto &k : result_vec) {\n                if (preview_box_track_id[i].track_id == k.track_id) {\n                    if (one_off_detections && preview_box_track_id[i].last_showed_frames_ago > 0) {\n                        preview_box_track_id[i].last_showed_frames_ago = frames_history; break;\n                    
}\n                    bbox_t b = k;\n                    cv::Rect r(b.x, b.y, b.w, b.h);\n                    cv::Rect img_rect(cv::Point2i(0, 0), src_mat.size());\n                    cv::Rect rect_roi = r & img_rect;\n                    if (rect_roi.width > 1 || rect_roi.height > 1) {\n                        cv::Mat roi = src_mat(rect_roi);\n                        cv::resize(roi, dst, cv::Size(preview_box_size, preview_box_size), cv::INTER_NEAREST);\n                        preview_box_track_id[i].mat_obj = roi.clone();\n                        preview_box_track_id[i].mat_resized_obj = dst.clone();\n                        preview_box_track_id[i].current_detection = true;\n                        preview_box_track_id[i].bbox = k;\n                    }\n                    break;\n                }\n            }\n        }\n    }\n\n\n    void draw(cv::Mat draw_mat, bool show_small_boxes = false)\n    {\n        // draw preview box (from old or current frame)\n        for (size_t i = 0; i < preview_box_track_id.size(); ++i)\n        {\n            auto &prev_box = preview_box_track_id[i];\n\n            // draw object image\n            cv::Mat dst = prev_box.mat_resized_obj;\n            if (prev_box.last_showed_frames_ago < frames_history &&\n                dst.size() == cv::Size(preview_box_size, preview_box_size))\n            {\n                cv::Rect dst_rect_roi(cv::Point2i(i * preview_box_size, draw_mat.rows - bottom_offset), dst.size());\n                cv::Mat dst_roi = draw_mat(dst_rect_roi);\n                dst.copyTo(dst_roi);\n\n                cv::Scalar color = obj_id_to_color(prev_box.obj_id);\n                int thickness = (prev_box.current_detection) ? 5 : 1;\n                cv::rectangle(draw_mat, dst_rect_roi, color, thickness);\n\n                unsigned int const track_id = prev_box.track_id;\n                std::string track_id_str = (track_id > 0) ? 
std::to_string(track_id) : \"\";\n                putText(draw_mat, track_id_str, dst_rect_roi.tl() - cv::Point2i(-4, 5), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.9, cv::Scalar(0, 0, 0), 2);\n\n                std::string size_str = std::to_string(prev_box.bbox.w) + \"x\" + std::to_string(prev_box.bbox.h);\n                putText(draw_mat, size_str, dst_rect_roi.tl() + cv::Point2i(0, 12), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, cv::Scalar(0, 0, 0), 1);\n\n                if (!one_off_detections && prev_box.current_detection) {\n                    cv::line(draw_mat, dst_rect_roi.tl() + cv::Point2i(preview_box_size, 0),\n                        cv::Point2i(prev_box.bbox.x, prev_box.bbox.y + prev_box.bbox.h),\n                        color);\n                }\n\n                if (one_off_detections && show_small_boxes) {\n                    cv::Rect src_rect_roi(cv::Point2i(prev_box.bbox.x, prev_box.bbox.y),\n                        cv::Size(prev_box.bbox.w, prev_box.bbox.h));\n                    unsigned int const color_history = (255 * prev_box.last_showed_frames_ago) / frames_history;\n                    color = cv::Scalar(255 - 3 * color_history, 255 - 2 * color_history, 255 - 1 * color_history);\n                    if (prev_box.mat_obj.size() == src_rect_roi.size()) {\n                        prev_box.mat_obj.copyTo(draw_mat(src_rect_roi));\n                    }\n                    cv::rectangle(draw_mat, src_rect_roi, color, thickness);\n                    putText(draw_mat, track_id_str, src_rect_roi.tl() - cv::Point2i(0, 10), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, cv::Scalar(0, 0, 0), 1);\n                }\n            }\n        }\n    }\n};\n\n\nclass track_kalman_t\n{\n    int track_id_counter;\n    std::chrono::steady_clock::time_point global_last_time;\n    float dT;\n\npublic:\n    int max_objects;    // max objects for tracking\n    int min_frames;     // min frames to consider an object as detected\n    const float max_dist;   // max distance (in px) 
to track with the same ID\n    cv::Size img_size;  // max value of x,y,w,h\n\n    struct tst_t {\n        int track_id;\n        int state_id;\n        std::chrono::steady_clock::time_point last_time;\n        int detection_count;\n        tst_t() : track_id(-1), state_id(-1) {}\n    };\n    std::vector<tst_t> track_id_state_id_time;\n    std::vector<bbox_t> result_vec_pred;\n\n    struct one_kalman_t;\n    std::vector<one_kalman_t> kalman_vec;\n\n    struct one_kalman_t\n    {\n        cv::KalmanFilter kf;\n        cv::Mat state;\n        cv::Mat meas;\n        int measSize, stateSize, contrSize;\n\n        void set_delta_time(float dT) {\n            kf.transitionMatrix.at<float>(2) = dT;\n            kf.transitionMatrix.at<float>(9) = dT;\n        }\n\n        void set(bbox_t box)\n        {\n            initialize_kalman();\n\n            kf.errorCovPre.at<float>(0) = 1; // px\n            kf.errorCovPre.at<float>(7) = 1; // px\n            kf.errorCovPre.at<float>(14) = 1;\n            kf.errorCovPre.at<float>(21) = 1;\n            kf.errorCovPre.at<float>(28) = 1; // px\n            kf.errorCovPre.at<float>(35) = 1; // px\n\n            state.at<float>(0) = box.x;\n            state.at<float>(1) = box.y;\n            state.at<float>(2) = 0;\n            state.at<float>(3) = 0;\n            state.at<float>(4) = box.w;\n            state.at<float>(5) = box.h;\n            // <<<< Initialization\n\n            kf.statePost = state;\n        }\n\n        // Kalman.correct() calculates: statePost = statePre + gain * (z(k)-measurementMatrix*statePre);\n        // corrected state (x(k)): x(k)=x'(k)+K(k)*(z(k)-H*x'(k))\n        void correct(bbox_t box) {\n            meas.at<float>(0) = box.x;\n            meas.at<float>(1) = box.y;\n            meas.at<float>(2) = box.w;\n            meas.at<float>(3) = box.h;\n\n            kf.correct(meas);\n\n            bbox_t new_box = predict();\n            if (new_box.w == 0 || new_box.h == 0) {\n                set(box);\n  
              //std::cerr << \" force set(): track_id = \" << box.track_id <<\n                //    \", x = \" << box.x << \", y = \" << box.y << \", w = \" << box.w << \", h = \" << box.h << std::endl;\n            }\n        }\n\n        // Kalman.predict() calculates: statePre = TransitionMatrix * statePost;\n        // predicted state (x'(k)): x(k)=A*x(k-1)+B*u(k)\n        bbox_t predict() {\n            bbox_t box;\n            state = kf.predict();\n\n            box.x = state.at<float>(0);\n            box.y = state.at<float>(1);\n            box.w = state.at<float>(4);\n            box.h = state.at<float>(5);\n            return box;\n        }\n\n        void initialize_kalman()\n        {\n            kf = cv::KalmanFilter(stateSize, measSize, contrSize, CV_32F);\n\n            // Transition State Matrix A\n            // Note: set dT at each processing step!\n            // [ 1 0 dT 0  0 0 ]\n            // [ 0 1 0  dT 0 0 ]\n            // [ 0 0 1  0  0 0 ]\n            // [ 0 0 0  1  0 0 ]\n            // [ 0 0 0  0  1 0 ]\n            // [ 0 0 0  0  0 1 ]\n            cv::setIdentity(kf.transitionMatrix);\n\n            // Measure Matrix H\n            // [ 1 0 0 0 0 0 ]\n            // [ 0 1 0 0 0 0 ]\n            // [ 0 0 0 0 1 0 ]\n            // [ 0 0 0 0 0 1 ]\n            kf.measurementMatrix = cv::Mat::zeros(measSize, stateSize, CV_32F);\n            kf.measurementMatrix.at<float>(0) = 1.0f;\n            kf.measurementMatrix.at<float>(7) = 1.0f;\n            kf.measurementMatrix.at<float>(16) = 1.0f;\n            kf.measurementMatrix.at<float>(23) = 1.0f;\n\n            // Process Noise Covariance Matrix Q - result smoother with lower values (1e-2)\n            // [ Ex   0   0     0     0    0  ]\n            // [ 0    Ey  0     0     0    0  ]\n            // [ 0    0   Ev_x  0     0    0  ]\n            // [ 0    0   0     Ev_y  0    0  ]\n            // [ 0    0   0     0     Ew   0  ]\n            // [ 0    0   0     0     0    Eh ]\n      
      //cv::setIdentity(kf.processNoiseCov, cv::Scalar(1e-3));\n            kf.processNoiseCov.at<float>(0) = 1e-2;\n            kf.processNoiseCov.at<float>(7) = 1e-2;\n            kf.processNoiseCov.at<float>(14) = 1e-2;// 5.0f;\n            kf.processNoiseCov.at<float>(21) = 1e-2;// 5.0f;\n            kf.processNoiseCov.at<float>(28) = 5e-3;\n            kf.processNoiseCov.at<float>(35) = 5e-3;\n\n            // Measures Noise Covariance Matrix R - result smoother with higher values (1e-1)\n            cv::setIdentity(kf.measurementNoiseCov, cv::Scalar(1e-1));\n\n            //cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1e-2));\n            // <<<< Kalman Filter\n\n            set_delta_time(0);\n        }\n\n\n        one_kalman_t(int _stateSize = 6, int _measSize = 4, int _contrSize = 0) :\n            kf(_stateSize, _measSize, _contrSize, CV_32F), measSize(_measSize), stateSize(_stateSize), contrSize(_contrSize)\n        {\n            state = cv::Mat(stateSize, 1, CV_32F);  // [x,y,v_x,v_y,w,h]\n            meas = cv::Mat(measSize, 1, CV_32F);    // [z_x,z_y,z_w,z_h]\n            //cv::Mat procNoise(stateSize, 1, type)\n            // [E_x,E_y,E_v_x,E_v_y,E_w,E_h]\n\n            initialize_kalman();\n        }\n    };\n    // ------------------------------------------\n\n\n\n    track_kalman_t(int _max_objects = 1000, int _min_frames = 3, float _max_dist = 40, cv::Size _img_size = cv::Size(10000, 10000)) :\n        max_objects(_max_objects), min_frames(_min_frames), max_dist(_max_dist), img_size(_img_size),\n        track_id_counter(0)\n    {\n        kalman_vec.resize(max_objects);\n        track_id_state_id_time.resize(max_objects);\n        result_vec_pred.resize(max_objects);\n    }\n\n    float calc_dt() {\n        dT = std::chrono::duration<double>(std::chrono::steady_clock::now() - global_last_time).count();\n        return dT;\n    }\n\n    static float get_distance(float src_x, float src_y, float dst_x, float dst_y) {\n        return 
sqrtf((src_x - dst_x)*(src_x - dst_x) + (src_y - dst_y)*(src_y - dst_y));\n    }\n\n    void clear_old_states() {\n        // clear old bboxes\n        for (size_t state_id = 0; state_id < track_id_state_id_time.size(); ++state_id)\n        {\n            float time_sec = std::chrono::duration<double>(std::chrono::steady_clock::now() - track_id_state_id_time[state_id].last_time).count();\n            float time_wait = 0.5;    // 0.5 second\n            if (track_id_state_id_time[state_id].track_id > -1)\n            {\n                if ((result_vec_pred[state_id].x > img_size.width) ||\n                    (result_vec_pred[state_id].y > img_size.height))\n                {\n                    track_id_state_id_time[state_id].track_id = -1;\n                }\n\n                if (time_sec >= time_wait || track_id_state_id_time[state_id].detection_count < 0) {\n                    //std::cerr << \" remove track_id = \" << track_id_state_id_time[state_id].track_id << \", state_id = \" << state_id << std::endl;\n                    track_id_state_id_time[state_id].track_id = -1; // remove bbox\n                }\n            }\n        }\n    }\n\n    tst_t get_state_id(bbox_t find_box, std::vector<bool> &busy_vec)\n    {\n        tst_t tst;\n        tst.state_id = -1;\n\n        float min_dist = std::numeric_limits<float>::max();\n\n        for (size_t i = 0; i < max_objects; ++i)\n        {\n            if (track_id_state_id_time[i].track_id > -1 && result_vec_pred[i].obj_id == find_box.obj_id && busy_vec[i] == false)\n            {\n                bbox_t pred_box = result_vec_pred[i];\n\n                float dist = get_distance(pred_box.x, pred_box.y, find_box.x, find_box.y);\n\n                float movement_dist = std::max(max_dist, static_cast<float>(std::max(pred_box.w, pred_box.h)) );\n\n                if ((dist < movement_dist) && (dist < min_dist)) {\n                    min_dist = dist;\n                    tst.state_id = i;\n                }\n      
      }\n        }\n\n        if (tst.state_id > -1) {\n            track_id_state_id_time[tst.state_id].last_time = std::chrono::steady_clock::now();\n            track_id_state_id_time[tst.state_id].detection_count = std::max(track_id_state_id_time[tst.state_id].detection_count + 2, 10);\n            tst = track_id_state_id_time[tst.state_id];\n            busy_vec[tst.state_id] = true;\n        }\n        else {\n            //std::cerr << \" Didn't find: obj_id = \" << find_box.obj_id << \", x = \" << find_box.x << \", y = \" << find_box.y <<\n            //    \", track_id_counter = \" << track_id_counter << std::endl;\n        }\n\n        return tst;\n    }\n\n    tst_t new_state_id(std::vector<bool> &busy_vec)\n    {\n        tst_t tst;\n        // find empty cell to add new track_id\n        auto it = std::find_if(track_id_state_id_time.begin(), track_id_state_id_time.end(), [&](tst_t &v) { return v.track_id == -1; });\n        if (it != track_id_state_id_time.end()) {\n            it->state_id = it - track_id_state_id_time.begin();\n            //it->track_id = track_id_counter++;\n            it->track_id = 0;\n            it->last_time = std::chrono::steady_clock::now();\n            it->detection_count = 1;\n            tst = *it;\n            busy_vec[it->state_id] = true;\n        }\n\n        return tst;\n    }\n\n    std::vector<tst_t> find_state_ids(std::vector<bbox_t> result_vec)\n    {\n        std::vector<tst_t> tst_vec(result_vec.size());\n\n        std::vector<bool> busy_vec(max_objects, false);\n\n        for (size_t i = 0; i < result_vec.size(); ++i)\n        {\n            tst_t tst = get_state_id(result_vec[i], busy_vec);\n            int state_id = tst.state_id;\n            int track_id = tst.track_id;\n\n            // if new state_id\n            if (state_id < 0) {\n                tst = new_state_id(busy_vec);\n                state_id = tst.state_id;\n                track_id = tst.track_id;\n                if (state_id > -1) {\n  
                  kalman_vec[state_id].set(result_vec[i]);\n                    //std::cerr << \" post: \";\n                }\n            }\n\n            //std::cerr << \" track_id = \" << track_id << \", state_id = \" << state_id <<\n            //    \", x = \" << result_vec[i].x << \", det_count = \" << tst.detection_count << std::endl;\n\n            if (state_id > -1) {\n                tst_vec[i] = tst;\n                result_vec_pred[state_id] = result_vec[i];\n                result_vec_pred[state_id].track_id = track_id;\n            }\n        }\n\n        return tst_vec;\n    }\n\n    std::vector<bbox_t> predict()\n    {\n        clear_old_states();\n        std::vector<bbox_t> result_vec;\n\n        for (size_t i = 0; i < max_objects; ++i)\n        {\n            tst_t tst = track_id_state_id_time[i];\n            if (tst.track_id > -1) {\n                bbox_t box = kalman_vec[i].predict();\n\n                result_vec_pred[i].x = box.x;\n                result_vec_pred[i].y = box.y;\n                result_vec_pred[i].w = box.w;\n                result_vec_pred[i].h = box.h;\n\n                if (tst.detection_count >= min_frames)\n                {\n                    if (track_id_state_id_time[i].track_id == 0) {\n                        track_id_state_id_time[i].track_id = ++track_id_counter;\n                        result_vec_pred[i].track_id = track_id_counter;\n                    }\n\n                    result_vec.push_back(result_vec_pred[i]);\n                }\n            }\n        }\n        //std::cerr << \"         result_vec.size() = \" << result_vec.size() << std::endl;\n\n        //global_last_time = std::chrono::steady_clock::now();\n\n        return result_vec;\n    }\n\n\n    std::vector<bbox_t> correct(std::vector<bbox_t> result_vec)\n    {\n        calc_dt();\n        clear_old_states();\n\n        for (size_t i = 0; i < max_objects; ++i)\n            track_id_state_id_time[i].detection_count--;\n\n        
std::vector<tst_t> tst_vec = find_state_ids(result_vec);\n\n        for (size_t i = 0; i < tst_vec.size(); ++i) {\n            tst_t tst = tst_vec[i];\n            int state_id = tst.state_id;\n            if (state_id > -1)\n            {\n                kalman_vec[state_id].set_delta_time(dT);\n                kalman_vec[state_id].correct(result_vec_pred[state_id]);\n            }\n        }\n\n        result_vec = predict();\n\n        global_last_time = std::chrono::steady_clock::now();\n\n        return result_vec;\n    }\n\n};\n// ----------------------------------------------\n#endif    // OPENCV\n\n#endif    // __cplusplus\n\n#endif    // YOLO_V2_CLASS_HPP\n"
  },
  {
    "path": "server/train/train.txt",
    "content": ""
  },
  {
    "path": "util/171204_pose3_info.txt",
    "content": "0, 0, 10\n1, 10, 13\n0, 20, 25\n3, 37, 44\n3, 56, 60\n0, 116, 136\n1, 147, 150\n1, 162, 165\n3, 192, 199\n3, 211, 215\n0, 272, 302\n1, 303, 305\n"
  },
  {
    "path": "util/171204_pose5_info.txt",
    "content": "1, 6, 9\n0, 17, 23\n3, 33, 41\n3, 53, 56\n0, 114, 144\n1, 145, 148\n1, 160, 164\n0, 173, 177\n3, 188, 195\n3, 208, 211\n0, 268, 297\n1, 298, 301\n1, 311, 315\n0, 323, 328\n3, 339, 347\n3, 359, 362\n0, 419, 449\n1, 450, 453\n1, 464, 466\n1, 468, 469\n0, 477, 481\n3, 492, 419\n3, 511, 515\n0, 572, 601\n1, 602, 606\n1, 617, 621\n0, 629, 634\n3, 645, 653\n3, 665, 669\n0, 726, 754\n1, 757, 759\n1, 776, 781\n0, 789, 792\n3, 805, 811\n3, 822, 827\n0, 885, 905"
  },
  {
    "path": "util/171204_pose6_info.txt",
    "content": "0, 1, 10\n1, 11, 15\n0, 23, 28\n3, 39, 46\n3, 58, 60\n0, 119, 149\n1, 150, 152\n1, 165, 170\n0, 178, 182\n3, 195, 211\n3, 213, 217\n0, 275, 303\n1, 304, 306\n1, 320, 326\n0, 335, 338\n3, 349, 355\n0, 432, 458\n1, 461, 463\n1, 472, 478\n0, 485, 489\n3, 501, 508\n3, 521, 523\n0, 582, 609\n1, 612, 615\n1, 627, 631\n0, 641, 644\n3, 655, 663\n3, 675, 678\n0, 736, 763\n1, 766, 768\n"
  },
  {
    "path": "util/concat.py",
    "content": "'''\nthis python script for MHAD dataset\nTo run this script is required ffmpeg-python(https://github.com/kkroening/ffmpeg-python)\n'''\n\nimport glob\nimport ffmpeg\n\nclusters = glob.glob(\"./Cluster*\")\n\nffmpeg_path = \"C:\\\\ffmpeg\\\\bin\\\\ffmpeg.exe\"\noutput_path = \".\\\\output\\\\\"\noutput_count = 1\nfor cl in clusters:\n    cameras = glob.glob(cl + \"\\\\Cam*\")\n    for ca in cameras:\n        subjects = glob.glob(ca + \"\\\\S*\")\n        for su in subjects:\n            targets = glob.glob(su + \"\\\\A04\\\\R03\\\\*.pgm\")\n            targets[0] = targets[0].replace(\"000.pgm\", \"%3d.pgm\")\n            in_stream = ffmpeg.input(targets[0])\n            in_stream.output(output_path + str(output_count).zfill(5) + \".mp4\", r = 25, pix_fmt=\"yuv420p\").run(ffmpeg_path, overwrite_output=True)\n            output_count += 1"
  },
  {
    "path": "util/cut.py",
    "content": "'''\nthis python script for CMU panoptic dataset\nTo run this script is required ffmpeg-python(https://github.com/kkroening/ffmpeg-python)\n'''\nimport glob\nimport ffmpeg\n\ninfo_file_name = \"./info.txt\"\n\nin_files = glob.glob(\"./*00_*.mp4\")\nin_file_streams = []\nfor file in in_files:\n    in_file_streams.append(ffmpeg.input(file))\n\nout_file_names = [\"1.mp4\", \"2.mp4\", \"3.mp4\", \"4.mp4\"]\nout_file_streams = [None] * len(out_file_names)\ninit_flags = [False] * len(out_file_names)\nfilestream = open(info_file_name, \"r\")\nfor line in filestream:\n    values = line[:-1].split(\",\")\n    action_type = int(values[0])\n    start = int(values[1])\n    end = int(values[2])\n    for in_stream in in_file_streams:\n        if init_flags[action_type] == False:\n            init_flags[action_type] = True\n            out_file_streams[action_type] = ffmpeg.concat(in_stream.trim(start=start, end=end).setpts('PTS-STARTPTS'))\n        else:\n            out_file_streams[action_type] = ffmpeg.concat(out_file_streams[action_type], in_stream.trim(start=start, end=end).setpts('PTS-STARTPTS'))\n\n\nfor i in range(len(out_file_names)):\n    if i == 0:\n        continue\n    if init_flags[i] == False:\n        continue\n    out = ffmpeg.output(out_file_streams[i], out_file_names[i], s='720x480')\n    ffmpeg.run(out, 'C:\\\\ffmpeg\\\\bin\\\\ffmpeg.exe')"
  }
]