Full Code of imsoo/fight_detection for AI

Repository: imsoo/fight_detection
Branch: master
Commit: f6a435ce03bd
Files: 55
Total size: 196.0 KB

Directory structure:
fight_detection/

├── LICENSE
├── README.md
├── client/
│   ├── darknet_client/
│   │   ├── Makefile
│   │   ├── darknet_client.vcxproj
│   │   ├── darknet_client.vcxproj.filters
│   │   ├── darknet_client.vcxproj.user
│   │   └── src/
│   │       ├── args.cpp
│   │       ├── args.hpp
│   │       ├── base64.cpp
│   │       ├── base64.hpp
│   │       ├── frame.cpp
│   │       ├── frame.hpp
│   │       ├── main.cpp
│   │       ├── mem_pool.cpp
│   │       ├── mem_pool.hpp
│   │       ├── share_queue.hpp
│   │       ├── util.cpp
│   │       └── util.hpp
│   └── darknet_client.sln
├── server/
│   ├── Makefile
│   ├── action.py
│   ├── action_train.py
│   ├── cfg/
│   │   └── openpose.cfg
│   ├── src/
│   │   ├── DetectorInterface.hpp
│   │   ├── Hungarian.cpp
│   │   ├── Hungarian.h
│   │   ├── KalmanTracker.cpp
│   │   ├── KalmanTracker.h
│   │   ├── Tracker.cpp
│   │   ├── Tracker.hpp
│   │   ├── args.cpp
│   │   ├── args.hpp
│   │   ├── base64.cpp
│   │   ├── base64.h
│   │   ├── frame.cpp
│   │   ├── frame.hpp
│   │   ├── mem_pool.cpp
│   │   ├── mem_pool.hpp
│   │   ├── people.cpp
│   │   ├── people.hpp
│   │   ├── pose_detector.cpp
│   │   ├── pose_detector.hpp
│   │   ├── share_queue.h
│   │   ├── sink.cpp
│   │   ├── ventilator.cpp
│   │   ├── worker.cpp
│   │   └── yolo_v2_class.hpp
│   ├── train/
│   │   └── train.txt
│   └── weights/
│       ├── action.h5
│       └── action2.h5
└── util/
    ├── 171204_pose3_info.txt
    ├── 171204_pose5_info.txt
    ├── 171204_pose6_info.txt
    ├── concat.py
    └── cut.py

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2019 Seungsu Lim

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
### Overview
Real-time fight detection based on 2D pose estimation and RNN action recognition.

This project is based on [darknet_server](https://github.com/imsoo/darknet_server). If you want to run this experiment, see [how to build](https://github.com/imsoo/darknet_server#how-to-build).


| ```Fight Detection System Pipeline``` |
|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71320889-4e737b80-24f5-11ea-8aac-4b4a527c6e64.png" width="100%" height="45%">|

### Pose Estimation and Object Tracking
A pipeline was built to extract 2D pose time-series data from a video sequence.

In the worker process, pose estimation is performed with ***OpenPose***. The input image passes through the Pose_detector, which produces a people object containing everyone's joint coordinates. The people object is serialized and sent to the sink process.

In the sink process, the people object is converted into person objects, and every person object is sent to the Tracker. The Tracker receives person objects and assigns object identities using ***SORT*** (Simple Online and Realtime Tracking).

Finally, we obtain joint time-series data for each person. Each person's time series is managed by a queue container, so a person object always holds the most recent 32 frames.

| ```Tracking Pipeline``` | ```Time series data``` |
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71316697-6b895980-24b7-11ea-92a1-33ec0dcd996a.png" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71302619-b5f3d300-23f0-11ea-9ab0-36791dd9aa48.png" width="150%" height="30%">|
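The per-person sliding buffer described above can be sketched with a bounded deque (a minimal Python sketch; the class and method names are hypothetical, not taken from the repository's C++ sources):

```python
from collections import defaultdict, deque

WINDOW = 32  # the tracker keeps only the most recent 32 frames per person

class TimeSeriesStore:
    """Maps a tracked person ID to its recent joint snapshots."""

    def __init__(self, window=WINDOW):
        # deque(maxlen=...) silently drops the oldest frame on overflow
        self.series = defaultdict(lambda: deque(maxlen=window))

    def push(self, person_id, joints):
        """Append one frame of joint coordinates for this person."""
        self.series[person_id].append(joints)

    def ready(self, person_id):
        """A person can be classified once a full window has accumulated."""
        return len(self.series[person_id]) == WINDOW

store = TimeSeriesStore()
for _ in range(40):                      # push more frames than the window
    store.push(7, [(0.0, 0.0)] * 18)     # 18 COCO joints per frame
print(len(store.series[7]))              # 32
```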

* #### Examples of Result (Pose Estimation and Object Tracking)

|<img src="https://user-images.githubusercontent.com/11255376/71260111-64f6c700-237d-11ea-918c-1e8d9f05d963.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71260228-b3a46100-237d-11ea-9116-b050841a760b.gif" width="150%" height="30%">|
|:---:|:---:|


### Collect action video
Videos of four action types (Standing, Walking, Punching, Kicking) were collected.

The Punching videos are a subset of the ***Berkeley Multimodal Human Action Database (MHAD) dataset***. This data consists of 12 subjects performing punching actions for 5 repetitions, filmed from 4 angles. (http://tele-immersion.citris-uc.org/berkeley_mhad)

The others (Standing, Walking, Kicking) are subsets of the ***CMU Panoptic Dataset***. The three action types were cut out of the Range of Motion videos (171204_pose3, 171204_pose5, 171204_pose6): timestamps were recorded per action type and the video was cut with a Python script (util/concat.py). This data consists of 13 subjects performing the three actions, filmed from 31 angles. (http://domedb.perception.cs.cmu.edu/index.html)

``` jsonc
0, 1, 10    // 0 : Standing, 1 : begin timestamp, 10: end timestamp
1, 11, 15   // 1 : Walking, 11 : begin timestamp, 15: end timestamp
3, 39, 46   // 3 : Kicking, 39 : begin timestamp, 46: end timestamp
```
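The annotation format above can be parsed into frame ranges as follows (a hypothetical sketch; the actual cutting and concatenation are handled by `util/cut.py` and `util/concat.py`, and the frame rate here is an assumption):

```python
def parse_labels(lines, fps=30):
    """Parse 'action, begin_sec, end_sec' annotation lines into
    (action, begin_frame, end_frame) tuples."""
    clips = []
    for line in lines:
        line = line.split('//')[0].strip()   # drop trailing comments
        if not line:
            continue
        action, begin, end = (int(v) for v in line.split(','))
        clips.append((action, begin * fps, end * fps))  # seconds -> frames
    return clips

clips = parse_labels(["0, 1, 10", "1, 11, 15", "3, 39, 46"])
print(clips)  # [(0, 30, 300), (1, 330, 450), (3, 1170, 1380)]
```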

* #### Examples of Dataset (Stand & Walk)
<table>
<tr><th><code>Stand (CMU Panoptic Dataset)</code></th><th><code>Walk (CMU Panoptic Dataset)</code></th>
<tr valign="middle">
<td>

|<img src="https://user-images.githubusercontent.com/11255376/71252302-f65b3e80-2367-11ea-8718-a25a0ac7f14b.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71252304-f6f3d500-2367-11ea-9b8d-f5eff5b5959c.gif" width="150%" height="30%">|
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71252305-f78c6b80-2367-11ea-868e-988a94d28bb7.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71252307-f8250200-2367-11ea-9ab0-b1fc29672555.gif" width="150%" height="30%">|
</td>
<td>

|<img src="https://user-images.githubusercontent.com/11255376/71253038-0aa03b00-236a-11ea-99ae-80cd7bd1a284.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71253041-0b38d180-236a-11ea-834e-04c4c96a1974.gif" width="150%" height="30%">|
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71253042-0bd16800-236a-11ea-9a15-a0183d3be629.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71253043-0d029500-236a-11ea-94c2-993b969bb57c.gif" width="150%" height="30%">|
</td>
</tr>
</table>

* #### Examples of Dataset (Punch & Kick)
<table>
<tr><th><code>Punch (Berkeley MHAD Dataset)</code></th><th><code>Kick (CMU Panoptic Dataset)</code></th>
<tr>
<td>

|<img src="https://user-images.githubusercontent.com/11255376/71253986-cc584b00-236c-11ea-90e7-ff0934ffe38e.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71254000-e2660b80-236c-11ea-8aee-fb874a2d5365.gif" width="150%" height="30%">|
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71254136-5f918080-236d-11ea-9730-3368f3d4b143.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71254138-602a1700-236d-11ea-8077-848ff1233b8c.gif" width="150%" height="30%">|

</td>
<td>

|<img src="https://user-images.githubusercontent.com/11255376/71253059-17bd2a00-236a-11ea-895f-ccfd3bf3fc14.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71253060-18ee5700-236a-11ea-9f23-e0b0e600d5f4.gif" width="150%" height="30%">|
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71253062-1986ed80-236a-11ea-9699-8c5a5d2670b4.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71253066-1ab81a80-236a-11ea-988e-e1f1cf301031.gif" width="150%" height="30%">|

</td>
</tr>
</table>

### Make training dataset
Action videos are fed into the tracking pipeline to obtain joint time-series data for each person. The resulting joint positions are processed into a feature vector:

* ***Angle*** : joint angle in the current frame
* ***ΔPoint*** : distance between a joint's position in the prior frame and in the current frame
* ***ΔAngle*** : change in joint angle between the prior frame and the current frame

* #### Examples of feature vector (***ΔPoint*** & ***ΔAngle***)

| ***ΔPoint*** | ***ΔAngle*** |
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71304150-f3af2680-2405-11ea-8837-7c741b358956.gif" width="130%" height="50%">|<img src="https://user-images.githubusercontent.com/11255376/71304149-f1e56300-2405-11ea-8699-acece9a11722.gif" width="130%" height="50%">|

* #### Overview of feature vector

<table>
<tr><th><code>Feature Vector</code></th><th><code>OpenPose COCO output format</code></th>
<tr>
<td width="66%">

| *IDX* | *0* | *1*  | *2* | *3* | *4* | *5* | *6* | *7* |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| ***Angle*** | 2-3 | 3-4 | 5-6 | 6-7 | 8-9 | 9-10 | 11-12 | 12-13 |
| ***ΔAngle*** | 2-3 | 3-4 | 5-6 | 6-7 | 8-9 | 9-10 | 11-12 | 12-13 |
| ***ΔPoint*** | 3 | 4 | 6 | 7 | 9 | 10 | 12 | 13 |

###### *※ 2 : RShoulder, 3 : RElbow, 4 : RWrist, 5 : LShoulder, 6 : LElbow, 7 : LWrist, 8 : RHip, 9 : RKnee, 10 : RAnkle, 11 : LHip, 12 : LKnee, 13 : LAnkle*
</td>
<td width="34%">

|<img src="https://user-images.githubusercontent.com/11255376/71308335-866bb780-243e-11ea-9593-e13d80b15059.png" width="100%" height="30%">|
|:---:|

</td>
</tr>
</table>
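Using the joint indices from the table above, one frame's feature vector (8 Angle + 8 ΔAngle + 8 ΔPoint = 24 values) can be sketched like this; the limb pairs follow the table, while the function and variable names are hypothetical:

```python
import math

# (parent, child) joint index pairs from the feature-vector table above
ANGLE_PAIRS = [(2, 3), (3, 4), (5, 6), (6, 7), (8, 9), (9, 10), (11, 12), (12, 13)]
DELTA_POINTS = [3, 4, 6, 7, 9, 10, 12, 13]

def joint_angle(a, b):
    """Orientation of the limb vector from joint a to joint b."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

def frame_features(prev, cur):
    """Build [Angle | dAngle | dPoint] for the current frame from two
    consecutive frames of (x, y) joint coordinates."""
    angles = [joint_angle(cur[i], cur[j]) for i, j in ANGLE_PAIRS]
    prev_angles = [joint_angle(prev[i], prev[j]) for i, j in ANGLE_PAIRS]
    d_angles = [c - p for c, p in zip(angles, prev_angles)]
    d_points = [math.dist(cur[k], prev[k]) for k in DELTA_POINTS]
    return angles + d_angles + d_points

prev = [(0.0, 0.0)] * 14                 # COCO body joints 0-13
cur = [(1.0, 0.0)] * 14                  # every joint moved one unit right
print(len(frame_features(prev, cur)))    # 24
```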

Finally, a feature vector is computed for each frame, and action training samples are built from sequences of 32 frame feature vectors. Adjacent training samples overlap by 26 frames. This yields a dataset of the four action types.
A summary of the dataset is:
* Standing : 7474 (7474 : pose3_stand) * 32 frames
* Walking : 4213 (854 : pose3_walk, 3359 : pose6_walk) * 32 frames
* Punching : 2187 (1115 : mhad_punch, 1072 : mhad_punch_flip) * 32 frames
* Kicking : 4694 (2558 : pose3_kick, 2136 : pose6_kick) * 32 frames
* Total : 18573 * 32 frames (https://drive.google.com/open?id=1ZNJDzQUjo2lDPwGoVkRLg77eA57dKUqx)
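The windowing described above (32-frame samples overlapping by 26 frames, i.e. a stride of 6) can be sketched as:

```python
def make_windows(frames, size=32, overlap=26):
    """Slice per-frame feature vectors into overlapping training samples."""
    stride = size - overlap              # 32 - 26 = 6 frames between starts
    return [frames[i:i + size]
            for i in range(0, len(frames) - size + 1, stride)]

windows = make_windows(list(range(50)))
print(len(windows))                      # 4 windows, starting at 0, 6, 12, 18
```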

|<img src="https://user-images.githubusercontent.com/11255376/71316454-1f3c1a80-24b3-11ea-9096-94e8cdc7adac.png" width="100%" height="50%">|
|:---:|

### RNN Training and Result
The network used in this experiment is based on:
* Guillaume Chevalier, 'LSTMs for Human Activity Recognition, 2016' https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition
* stuarteiffert, 'RNN-for-Human-Activity-Recognition-using-2D-Pose-Input' https://github.com/stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input

Training was run for 300 epochs with a batch size of 1024. (weights/action.h5)

After training, an action recognition pipeline was built to obtain recognition results in real time. The sink process sends each person's time-series feature vector to the action process as a string. The action process feeds the received data into the RNN and sends back the prediction result. (0 : Standing, 1 : Walking, 2 : Punching, 3 : Kicking)

| ```Action Recognition Pipeline``` | ```RNN Model``` |
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71318418-ed877b80-24d3-11ea-993c-d776d8e980c4.png" width="100%" height="50%">|<img src="https://user-images.githubusercontent.com/11255376/71318651-c7afa600-24d6-11ea-8baa-23153316bee8.png" width="100%" height="50%">|
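The sink-to-action hand-off serializes each feature window to a string. The real wire format is defined in `src/sink.cpp` and `action.py`, so the comma/semicolon layout below is only a hypothetical round-trip sketch:

```python
def encode_window(window):
    """Serialize a window of per-frame feature vectors into one string."""
    return ';'.join(','.join(f'{v:.4f}' for v in frame) for frame in window)

def decode_window(msg):
    """Recover the list-of-lists structure on the receiving side."""
    return [[float(v) for v in frame.split(',')] for frame in msg.split(';')]

window = [[0.1, 0.2], [0.3, 0.4]]
msg = encode_window(window)
print(msg)                               # 0.1000,0.2000;0.3000,0.4000
print(decode_window(msg) == window)      # True
```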


* #### Examples of Result (RNN Action Recognition)
| [```Standing```](https://www.youtube.com/watch?v=Orc0Eq9bWOs) | [```Walking```](https://www.youtube.com/watch?v=Orc0Eq9bWOs) | [```Punching```](https://www.youtube.com/watch?v=kbgkeTTSau8) | [```Kicking```](https://www.youtube.com/watch?v=R1UWcG9N6tI) |
|:---:|:---:|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71256358-d6317c80-2373-11ea-8fd9-2ae1777c8a0f.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71256359-d6ca1300-2373-11ea-812a-babb3b5b2ad5.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71256361-d7fb4000-2373-11ea-8a17-26ce9f9dc8f5.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71256362-d893d680-2373-11ea-841c-ced2f1d4ba02.gif" width="150%" height="30%">|

### Fight Detection

This stage checks whether a person who kicks or punches is actually hitting someone. If a person has hit another, the two are marked as each other's enemy.
The system counts this as a fight and keeps tracking both people for as long as they remain in the frame.

| ```Fight Detection Pipeline``` |
|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71320794-cc368780-24f3-11ea-8928-8e920bc69f26.png" width="100%" height="50%">|
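The hit check described above can be sketched as a point-in-box test between an attacker's striking joint and every other person's bounding box (field names such as `strike_point` are hypothetical; the actual logic lives in the C++ sink and tracker sources):

```python
PUNCHING, KICKING = 2, 3   # action labels from the recognition stage

def inside(point, box):
    """Is point (x, y) inside box (left, top, right, bottom)?"""
    x, y = point
    left, top, right, bottom = box
    return left <= x <= right and top <= y <= bottom

def detect_enemies(people):
    """Pair up attacker and victim whenever a punching or kicking
    person's striking joint lands inside another person's box."""
    enemies = set()
    for aid, a in people.items():
        if a['action'] not in (PUNCHING, KICKING):
            continue
        for bid, b in people.items():
            if bid != aid and inside(a['strike_point'], b['box']):
                enemies.add(frozenset((aid, bid)))
    return enemies

people = {
    1: {'action': PUNCHING, 'strike_point': (55, 40), 'box': (0, 0, 30, 80)},
    2: {'action': 0, 'strike_point': (60, 40), 'box': (50, 10, 90, 90)},
}
print(detect_enemies(people))            # {frozenset({1, 2})}
```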

* #### Examples of Result
| [```Fighting Championship```](https://www.youtube.com/watch?v=cIhoK4cPbC4) | [```CCTV Video```](https://www.youtube.com/watch?v=stJPOb6zW7U) |
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71256826-54dae980-2375-11ea-808b-be89bfaea5c1.gif" width="150%" height="30%">|<img src="https://user-images.githubusercontent.com/11255376/71356085-809fde80-25c4-11ea-90a9-a63eef4d6629.gif" width="150%" height="30%">|

| [```Sparring video A```](https://www.youtube.com/watch?v=x0kJmieuFzI) | [```Sparring video B```](https://www.youtube.com/watch?v=x0kJmieuFzI) |
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71356417-821dd680-25c5-11ea-945a-d4dab5f9e1e0.gif" width="130%" height="60%">|<img src="https://user-images.githubusercontent.com/11255376/71356526-df198c80-25c5-11ea-98bf-11e4edf81430.gif" width="130%" height="60%">|

* #### Examples of Result (Failure case)
| [```Fake Person```](https://www.youtube.com/watch?v=kbgkeTTSau8)  | [```Small Person```](https://www.youtube.com/watch?v=aUdKzb4LGJI) | 
|:---:|:---:|
|<img src="https://user-images.githubusercontent.com/11255376/71256575-7daeaf00-2374-11ea-82dd-579a07788acc.gif" width="130%" height="50%">|<img src="https://user-images.githubusercontent.com/11255376/71257257-94ee9c00-2376-11ea-8940-659a7eae08b8.gif" width="130%" height="50%">|

### References
* #### Darknet : https://github.com/AlexeyAB/darknet
* #### OpenCV : https://github.com/opencv/opencv
* #### ZeroMQ : https://github.com/zeromq/libzmq
* #### json-c : https://github.com/json-c/json-c 
* #### openpose-darknet : https://github.com/lincolnhard/openpose-darknet
* #### sort-cpp : https://github.com/mcximing/sort-cpp
* #### cpp-base64 : https://github.com/ReneNyffenegger/cpp-base64
* #### mem_pool : https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how-to-implement-it
* #### share_queue : https://stackoverflow.com/questions/36762248/why-is-stdqueue-not-thread-safe
* #### RNN-for-Human-Activity-Recognition-using-2D-Pose-Input : https://github.com/stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input
* #### LSTM-Human-Activity-Recognition : https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition


================================================
FILE: client/darknet_client/Makefile
================================================
DEBUG = 1

CPP = g++
COMMON = -DOPENCV
CXXFLAGS = -g -Wall -O2 -std=c++11 -DOPENCV
LDFLAGS = -lstdc++ -lpthread -lzmq -lrt -ltbb

CXXFLAGS += `pkg-config --cflags json-c`
CXXFLAGS += `pkg-config --cflags opencv`

LDFLAGS += `pkg-config --libs json-c`
LDFLAGS += `pkg-config --libs opencv`

ifeq ($(DEBUG), 1)
COMMON += -DDEBUG
endif

VPATH = ./src/
OBJDIR = ./obj/
DEPS = $(wildcard src/*.h*)

EXEC1 = darknet_client
EXEC1_OBJ = main.o frame.o mem_pool.o base64.o args.o util.o
EXEC1_OBJS = $(addprefix $(OBJDIR), $(EXEC1_OBJ))

OBJS = $(EXEC1_OBJS) 
EXECS = $(EXEC1) 

all: $(OBJDIR) $(EXECS)

$(EXEC1): $(EXEC1_OBJS)
	$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)

$(OBJDIR)%.o: %.cpp $(DEPS)
	$(CPP) $(COMMON) $(CXXFLAGS) -c $< -o $@ 

$(OBJDIR):
	mkdir -p $(OBJDIR) res

clean:
	rm -rf $(OBJS) $(EXECS) 


================================================
FILE: client/darknet_client/darknet_client.vcxproj
================================================
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup Label="ProjectConfigurations">
    <ProjectConfiguration Include="Debug|Win32">
      <Configuration>Debug</Configuration>
      <Platform>Win32</Platform>
    </ProjectConfiguration>
    <ProjectConfiguration Include="Release|Win32">
      <Configuration>Release</Configuration>
      <Platform>Win32</Platform>
    </ProjectConfiguration>
    <ProjectConfiguration Include="Debug|x64">
      <Configuration>Debug</Configuration>
      <Platform>x64</Platform>
    </ProjectConfiguration>
    <ProjectConfiguration Include="Release|x64">
      <Configuration>Release</Configuration>
      <Platform>x64</Platform>
    </ProjectConfiguration>
  </ItemGroup>
  <ItemGroup>
    <ClCompile Include="src\args.cpp" />
    <ClCompile Include="src\base64.cpp" />
    <ClCompile Include="src\frame.cpp" />
    <ClCompile Include="src\main.cpp" />
    <ClCompile Include="src\mem_pool.cpp" />
    <ClCompile Include="src\util.cpp" />
  </ItemGroup>
  <ItemGroup>
    <ClInclude Include="src\args.hpp" />
    <ClInclude Include="src\base64.hpp" />
    <ClInclude Include="src\frame.hpp" />
    <ClInclude Include="src\mem_pool.hpp" />
    <ClInclude Include="src\share_queue.hpp" />
    <ClInclude Include="src\util.hpp" />
  </ItemGroup>
  <PropertyGroup Label="Globals">
    <VCProjectVersion>15.0</VCProjectVersion>
    <ProjectGuid>{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}</ProjectGuid>
    <RootNamespace>darknetclient</RootNamespace>
    <WindowsTargetPlatformVersion>10.0.17763.0</WindowsTargetPlatformVersion>
    <ProjectName>darknet_client</ProjectName>
  </PropertyGroup>
  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
    <ConfigurationType>Application</ConfigurationType>
    <UseDebugLibraries>true</UseDebugLibraries>
    <PlatformToolset>v141</PlatformToolset>
    <CharacterSet>MultiByte</CharacterSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
    <ConfigurationType>Application</ConfigurationType>
    <UseDebugLibraries>false</UseDebugLibraries>
    <PlatformToolset>v141</PlatformToolset>
    <WholeProgramOptimization>true</WholeProgramOptimization>
    <CharacterSet>MultiByte</CharacterSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
    <ConfigurationType>Application</ConfigurationType>
    <UseDebugLibraries>true</UseDebugLibraries>
    <PlatformToolset>v141</PlatformToolset>
    <CharacterSet>MultiByte</CharacterSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
    <ConfigurationType>Application</ConfigurationType>
    <UseDebugLibraries>false</UseDebugLibraries>
    <PlatformToolset>v141</PlatformToolset>
    <WholeProgramOptimization>true</WholeProgramOptimization>
    <CharacterSet>MultiByte</CharacterSet>
  </PropertyGroup>
  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
  <ImportGroup Label="ExtensionSettings">
  </ImportGroup>
  <ImportGroup Label="Shared">
  </ImportGroup>
  <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
    <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
    <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
    <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
  </ImportGroup>
  <ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
    <Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
  </ImportGroup>
  <PropertyGroup Label="UserMacros" />
  <PropertyGroup />
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
    <ClCompile>
      <WarningLevel>Level3</WarningLevel>
      <Optimization>Disabled</Optimization>
      <SDLCheck>true</SDLCheck>
      <ConformanceMode>true</ConformanceMode>
    </ClCompile>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
    <ClCompile>
      <WarningLevel>Level3</WarningLevel>
      <Optimization>Disabled</Optimization>
      <SDLCheck>true</SDLCheck>
      <ConformanceMode>true</ConformanceMode>
      <AdditionalIncludeDirectories>C:\opencv\build\include;C:\git\vcpkg\packages\zeromq_x64-windows-static\include</AdditionalIncludeDirectories>
      <PreprocessorDefinitions>ZMQ_STATIC;_MBCS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
    </ClCompile>
    <Link>
      <AdditionalDependencies>opencv_world340d.lib;C:\git\vcpkg\packages\zeromq_x64-windows-static\debug\lib\libzmq-mt-sgd-4_3_3.lib;Ws2_32.lib;Iphlpapi.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <AdditionalLibraryDirectories>C:\opencv\build\x64\vc15\lib;</AdditionalLibraryDirectories>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
    <ClCompile>
      <WarningLevel>Level3</WarningLevel>
      <Optimization>MaxSpeed</Optimization>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <SDLCheck>true</SDLCheck>
      <ConformanceMode>true</ConformanceMode>
    </ClCompile>
    <Link>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <OptimizeReferences>true</OptimizeReferences>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
    <ClCompile>
      <WarningLevel>Level3</WarningLevel>
      <Optimization>MaxSpeed</Optimization>
      <FunctionLevelLinking>true</FunctionLevelLinking>
      <IntrinsicFunctions>true</IntrinsicFunctions>
      <SDLCheck>true</SDLCheck>
      <ConformanceMode>true</ConformanceMode>
      <AdditionalIncludeDirectories>C:\opencv\build\include;C:\git\vcpkg\packages\zeromq_x64-windows-static\include</AdditionalIncludeDirectories>
      <PreprocessorDefinitions>ZMQ_STATIC;_MBCS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>
    </ClCompile>
    <Link>
      <EnableCOMDATFolding>true</EnableCOMDATFolding>
      <OptimizeReferences>true</OptimizeReferences>
      <AdditionalDependencies>C:\opencv\build\x64\vc15\lib\opencv_world340.lib;C:\git\vcpkg\packages\zeromq_x64-windows-static\lib\libzmq-mt-s-4_3_3.lib;Ws2_32.lib;Iphlpapi.lib;%(AdditionalDependencies)</AdditionalDependencies>
      <AdditionalLibraryDirectories>C:\opencv\build\x64\vc15\lib;</AdditionalLibraryDirectories>
    </Link>
  </ItemDefinitionGroup>
  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
  <ImportGroup Label="ExtensionTargets">
  </ImportGroup>
</Project>

================================================
FILE: client/darknet_client/darknet_client.vcxproj.filters
================================================
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Filter Include="소스 파일">
      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
    </Filter>
    <Filter Include="헤더 파일">
      <UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
      <Extensions>h;hh;hpp;hxx;hm;inl;inc;ipp;xsd</Extensions>
    </Filter>
    <Filter Include="리소스 파일">
      <UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>
      <Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>
    </Filter>
  </ItemGroup>
  <ItemGroup>
    <ClCompile Include="src\args.cpp">
      <Filter>소스 파일</Filter>
    </ClCompile>
    <ClCompile Include="src\base64.cpp">
      <Filter>소스 파일</Filter>
    </ClCompile>
    <ClCompile Include="src\frame.cpp">
      <Filter>소스 파일</Filter>
    </ClCompile>
    <ClCompile Include="src\main.cpp">
      <Filter>소스 파일</Filter>
    </ClCompile>
    <ClCompile Include="src\mem_pool.cpp">
      <Filter>소스 파일</Filter>
    </ClCompile>
    <ClCompile Include="src\util.cpp">
      <Filter>소스 파일</Filter>
    </ClCompile>
  </ItemGroup>
  <ItemGroup>
    <ClInclude Include="src\args.hpp">
      <Filter>헤더 파일</Filter>
    </ClInclude>
    <ClInclude Include="src\base64.hpp">
      <Filter>헤더 파일</Filter>
    </ClInclude>
    <ClInclude Include="src\frame.hpp">
      <Filter>헤더 파일</Filter>
    </ClInclude>
    <ClInclude Include="src\mem_pool.hpp">
      <Filter>헤더 파일</Filter>
    </ClInclude>
    <ClInclude Include="src\share_queue.hpp">
      <Filter>헤더 파일</Filter>
    </ClInclude>
    <ClInclude Include="src\util.hpp">
      <Filter>헤더 파일</Filter>
    </ClInclude>
  </ItemGroup>
</Project>

================================================
FILE: client/darknet_client/darknet_client.vcxproj.user
================================================
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
    <LocalDebuggerEnvironment>PATH=C:\opencv\build\x64\vc15\bin;%PATH%</LocalDebuggerEnvironment>
    <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
    <LocalDebuggerCommandArguments>
    </LocalDebuggerCommandArguments>
    <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
  </PropertyGroup>
</Project>

================================================
FILE: client/darknet_client/src/args.cpp
================================================
// https://github.com/pjreddie/template

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "args.hpp"

void del_arg(int argc, char **argv, int index)
{
  int i;
  for (i = index; i < argc - 1; ++i) argv[i] = argv[i + 1];
  argv[i] = 0;
}

int find_arg(int argc, char* argv[], const char *arg)
{
  int i;
  for (i = 0; i < argc; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      del_arg(argc, argv, i);
      return 1;
    }
  }
  return 0;
}

int find_int_arg(int argc, char **argv, const char *arg, int def)
{
  int i;
  for (i = 0; i < argc - 1; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      def = atoi(argv[i + 1]);
      del_arg(argc, argv, i);
      del_arg(argc, argv, i);
      break;
    }
  }
  return def;
}

float find_float_arg(int argc, char **argv, const char *arg, float def)
{
  int i;
  for (i = 0; i < argc - 1; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      def = atof(argv[i + 1]);
      del_arg(argc, argv, i);
      del_arg(argc, argv, i);
      break;
    }
  }
  return def;
}

const char *find_char_arg(int argc, char **argv, const char *arg, const char *def)
{
  int i;
  for (i = 0; i < argc - 1; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      def = argv[i + 1];
      del_arg(argc, argv, i);
      del_arg(argc, argv, i);
      break;
    }
  }
  return def;
}


================================================
FILE: client/darknet_client/src/args.hpp
================================================
// https://github.com/pjreddie/template

#ifndef ARGS_H
#define ARGS_H

int find_arg(int argc, char* argv[], const char *arg);
int find_int_arg(int argc, char **argv, const char *arg, int def);
float find_float_arg(int argc, char **argv, const char *arg, float def);
const char *find_char_arg(int argc, char **argv, const char *arg, const char *def);

#endif

================================================
FILE: client/darknet_client/src/base64.cpp
================================================
/*
   base64.cpp and base64.h
   base64 encoding and decoding with C++.
   Version: 1.01.00
   Copyright (C) 2004-2017 René Nyffenegger
   This source code is provided 'as-is', without any express or implied
   warranty. In no event will the author be held liable for any damages
   arising from the use of this software.
   Permission is granted to anyone to use this software for any purpose,
   including commercial applications, and to alter it and redistribute it
   freely, subject to the following restrictions:
   1. The origin of this source code must not be misrepresented; you must not
      claim that you wrote the original source code. If you use this source code
      in a product, an acknowledgment in the product documentation would be
      appreciated but is not required.
   2. Altered source versions must be plainly marked as such, and must not be
      misrepresented as being the original source code.
   3. This notice may not be removed or altered from any source distribution.
   René Nyffenegger rene.nyffenegger@adp-gmbh.ch
*/

#include "base64.hpp"
#include <iostream>

static const std::string base64_chars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"abcdefghijklmnopqrstuvwxyz"
"0123456789+/";


static inline bool is_base64(unsigned char c) {
  return (isalnum(c) || (c == '+') || (c == '/'));
}

std::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len) {
  std::string ret;
  int i = 0;
  int j = 0;
  unsigned char char_array_3[3];
  unsigned char char_array_4[4];

  while (in_len--) {
    char_array_3[i++] = *(bytes_to_encode++);
    if (i == 3) {
      char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
      char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
      char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
      char_array_4[3] = char_array_3[2] & 0x3f;

      for (i = 0; (i < 4); i++)
        ret += base64_chars[char_array_4[i]];
      i = 0;
    }
  }

  if (i)
  {
    for (j = i; j < 3; j++)
      char_array_3[j] = '\0';

    char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
    char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
    char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);

    for (j = 0; (j < i + 1); j++)
      ret += base64_chars[char_array_4[j]];

    while ((i++ < 3))
      ret += '=';

  }

  return ret;

}

std::string base64_decode(std::string const& encoded_string) {
  size_t in_len = encoded_string.size();
  int i = 0;
  int j = 0;
  int in_ = 0;
  unsigned char char_array_4[4], char_array_3[3];
  std::string ret;

  while (in_len-- && (encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {
    char_array_4[i++] = encoded_string[in_]; in_++;
    if (i == 4) {
      for (i = 0; i < 4; i++)
        char_array_4[i] = base64_chars.find(char_array_4[i]) & 0xff;

      char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
      char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
      char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];

      for (i = 0; (i < 3); i++)
        ret += char_array_3[i];
      i = 0;
    }
  }

  if (i) {
    for (j = 0; j < i; j++)
      char_array_4[j] = base64_chars.find(char_array_4[j]) & 0xff;

    char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
    char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);

    for (j = 0; (j < i - 1); j++) ret += char_array_3[j];
  }

  return ret;
}
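Annotation: the encoder/decoder above use the standard RFC 4648 alphabet with `=` padding and no line breaks, so Python's stdlib `base64` module produces byte-identical results and can serve as a quick reference check (Python here is purely illustrative, not part of the client build).

```python
import base64

# Reference behavior for the C++ base64_encode/base64_decode above:
# standard alphabet, '=' padding, no line wrapping.
def encode(data: bytes) -> str:
    return base64.b64encode(data).decode("ascii")

def decode(text: str) -> bytes:
    return base64.b64decode(text)

assert encode(b"Man") == "TWFu"   # 3 input bytes -> 4 chars, no padding
assert encode(b"Ma") == "TWE="    # 2 input bytes -> one '=' pad
assert encode(b"M") == "TQ=="     # 1 input byte  -> two '=' pads
assert decode(encode(b"\x00\xff\x10")) == b"\x00\xff\x10"  # round trip
```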

================================================
FILE: client/darknet_client/src/base64.hpp
================================================
//
//  base64 encoding and decoding with C++.
//  Version: 1.01.00
//

#ifndef BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A
#define BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A

#include <string>

std::string base64_encode(unsigned char const*, unsigned int len);
std::string base64_decode(std::string const& s);

#endif /* BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A */

================================================
FILE: client/darknet_client/src/frame.cpp
================================================
#include <sstream>	// for stringstream
#include <json-c/json.h>	// for json
#include <cstring> // for memcpy
#include "frame.hpp"
#include "base64.hpp"

Frame_pool::Frame_pool()
{
  mem_pool_msg = new CMemPool(MEM_POOL_UNIT_NUM, MSG_BUF_LEN);
  mem_pool_seq = new CMemPool(MEM_POOL_UNIT_NUM, SEQ_BUF_LEN);
  mem_pool_det = new CMemPool(MEM_POOL_UNIT_NUM, DET_BUF_LEN);
};

Frame_pool::Frame_pool(int unit_num)
{
  mem_pool_msg = new CMemPool(unit_num, MSG_BUF_LEN);
  mem_pool_seq = new CMemPool(unit_num, SEQ_BUF_LEN);
  mem_pool_det = new CMemPool(unit_num, DET_BUF_LEN);
};

Frame_pool::~Frame_pool()
{
  delete mem_pool_msg;
  delete mem_pool_seq;
  delete mem_pool_det;
};

Frame Frame_pool::alloc_frame(void) {
  Frame frame;
  frame_init(frame);
  return frame;
};

void Frame_pool::free_frame(Frame& frame) {
  mem_pool_seq->Free((void *)frame.seq_buf);
  mem_pool_msg->Free((void *)frame.msg_buf);
  mem_pool_det->Free((void *)frame.det_buf);
}

void Frame_pool::frame_init(Frame& frame) {
  frame.seq_len = frame.msg_len = frame.det_len = 0;
  frame.seq_buf = (unsigned char *)(mem_pool_seq->Alloc(SEQ_BUF_LEN, true));
  frame.msg_buf = (unsigned char *)(mem_pool_msg->Alloc(MSG_BUF_LEN, true));
  frame.det_buf = (unsigned char *)(mem_pool_det->Alloc(DET_BUF_LEN, true));
};

int frame_to_json(void* buf, const Frame& frame) {
  std::stringstream ss;
  ss << "{\n\"seq\":\"" << base64_encode((unsigned char *)frame.seq_buf, frame.seq_len) << "\",\n"
    << "\"msg\": \"" << base64_encode((unsigned char*)(frame.msg_buf), frame.msg_len) << "\",\n"
    << "\"det\": \"" << base64_encode((unsigned char*)(frame.det_buf), frame.det_len)
    << "\"\n}";

  std::memcpy(buf, ss.str().c_str(), ss.str().size());
  ((unsigned char*)buf)[ss.str().size()] = '\0';
  return ss.str().size();
};

void json_to_frame(void* buf, Frame& frame) {
  json_object *raw_obj;
  raw_obj = json_tokener_parse((const char*)buf);

  json_object *seq_obj = json_object_object_get(raw_obj, "seq");
  json_object *msg_obj = json_object_object_get(raw_obj, "msg");
  json_object *det_obj = json_object_object_get(raw_obj, "det");


  std::string seq(base64_decode(json_object_get_string(seq_obj)));
  std::string msg(base64_decode(json_object_get_string(msg_obj)));
  std::string det(base64_decode(json_object_get_string(det_obj)));

  frame.seq_len = seq.size();
  frame.msg_len = msg.size();
  frame.det_len = det.size();


  std::memcpy(frame.seq_buf, seq.c_str(), frame.seq_len);
  ((unsigned char*)frame.seq_buf)[frame.seq_len] = '\0';

  std::memcpy(frame.msg_buf, msg.c_str(), frame.msg_len);
  ((unsigned char*)frame.msg_buf)[frame.msg_len] = '\0';

  std::memcpy(frame.det_buf, det.c_str(), frame.det_len);
  ((unsigned char*)frame.det_buf)[frame.det_len] = '\0';

  // free the parsed object tree; the "seq"/"msg"/"det" children are owned
  // by raw_obj, so only the root reference must be released
  json_object_put(raw_obj);
};
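Annotation: `frame_to_json`/`json_to_frame` above define the wire format between client and server — a JSON object whose `seq`, `msg` and `det` fields carry base64-encoded byte buffers. A minimal sketch of the same format, with Python's `json`/`base64` modules standing in for json-c and base64.cpp:

```python
import base64
import json

# Each frame travels as {"seq": ..., "msg": ..., "det": ...} where every
# field is a base64-encoded byte buffer (sequence number, JPEG payload,
# detection results), mirroring frame_to_json / json_to_frame above.
def frame_to_json(seq: bytes, msg: bytes, det: bytes) -> str:
    return json.dumps({
        "seq": base64.b64encode(seq).decode("ascii"),
        "msg": base64.b64encode(msg).decode("ascii"),
        "det": base64.b64encode(det).decode("ascii"),
    })

def json_to_frame(text: str):
    obj = json.loads(text)
    return tuple(base64.b64decode(obj[k]) for k in ("seq", "msg", "det"))

wire = frame_to_json(b"1", b"<jpeg bytes>", b"")
assert json_to_frame(wire) == (b"1", b"<jpeg bytes>", b"")
```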

================================================
FILE: client/darknet_client/src/frame.hpp
================================================
#ifndef __FRAME_HPP
#define __FRAME_HPP
#include "mem_pool.hpp"

struct Frame {
  int seq_len;
  int msg_len;
  int det_len;
  void *seq_buf;
  void *msg_buf;
  void *det_buf;
};

const int SEQ_BUF_LEN = 100;
const int MSG_BUF_LEN = 76800;
const int DET_BUF_LEN = 25600;
const int JSON_BUF_LEN = MSG_BUF_LEN * 2;
class Frame_pool
{
private:
  CMemPool *mem_pool_msg;
  CMemPool *mem_pool_seq;
  CMemPool *mem_pool_det;
  const int MEM_POOL_UNIT_NUM = 5000;

public:
  Frame_pool();
  Frame_pool(int unit_num);
  Frame alloc_frame(void);
  void free_frame(Frame& frame);
  void frame_init(Frame& frame);
  ~Frame_pool();
};

int frame_to_json(void* buf, const Frame& frame);
void json_to_frame(void* buf, Frame& frame);
#endif

================================================
FILE: client/darknet_client/src/main.cpp
================================================
#include <zmq.h>
#include <iostream>
#include <thread>
#include <queue>
#include <chrono>
#include <string>
#include <fstream>
#include <csignal>
#ifdef __linux__
#include <tbb/concurrent_priority_queue.h>
#elif _WIN32
#include <concurrent_priority_queue.h>
#endif
#include <opencv2/opencv.hpp>
#include "share_queue.hpp"
#include "frame.hpp"
#include "util.hpp"
#include "args.hpp"

#ifdef __linux__
#define FD_SETSIZE 4096
using namespace tbb;
#elif _WIN32
using namespace concurrency;
#endif
using namespace cv;
using namespace std;

// thread
void fetch_thread(void);
void capture_thread(void);
void recv_thread(void);
void output_show_thread(void);
void input_show_thread(void);

volatile bool fetch_flag = false;
volatile bool exit_flag = false;
volatile int final_exit_flag = 0;

// ZMQ
void *context;
void *sock_push;
void *sock_sub;

// pair
class ComparePair
{
public:
  bool operator()(pair<long, Frame> n1, pair<long, Frame> n2) {
    return n1.first > n2.first;
  }
};
Frame_pool *frame_pool;
concurrent_priority_queue<pair<long, Frame>, ComparePair> recv_queue;

// Queue
SharedQueue<Mat> cap_queue;
SharedQueue<Mat> fetch_queue;

// opencv
VideoCapture cap;
VideoWriter writer;
static Mat mat_show_output;
static Mat mat_show_input;
static Mat mat_recv;
static Mat mat_cap;
static Mat mat_fetch;

const int cap_width = 640;
const int cap_height = 480;
double delay;
long volatile show_frame = 1;
double end_frame;

// option
bool cam_input_flag;
bool vid_input_flag;
bool dont_show_flag;
bool json_output_flag;
bool vid_output_flag;

// output
string out_json_path;
string out_vid_path;

ofstream out_json_file;

void sig_handler(int s)
{
  exit_flag = true;
}

int main(int argc, char *argv[])
{
  if (argc < 2) {
    std::cerr << "Usage: " << argv[0] << " <-addr ADDR> <-cam CAM_NUM | -vid VIDEO_PATH> [-dont_show] [-out_json] [-out_vid] \n" << std::endl;
    return 0;
  }

  // install signal
  std::signal(SIGINT, sig_handler);

  // option init
  int cam_num = find_int_arg(argc, argv, "-cam", -1);
  if (cam_num != -1)
    cam_input_flag = true;

  const char *vid_def_path = "./test.mp4";
  const char *vid_path = find_char_arg(argc, argv, "-vid", vid_def_path);
  if (vid_path != vid_def_path)
    vid_input_flag = true;

  dont_show_flag = find_arg(argc, argv, "-dont_show");
  json_output_flag = find_arg(argc, argv, "-out_json");
  vid_output_flag = find_arg(argc, argv, "-out_vid");

  // frame_pool init
  frame_pool = new Frame_pool(5000);

  // ZMQ
  const char *addr = find_char_arg(argc, argv, "-addr", "127.0.0.1");
  context = zmq_ctx_new();

  sock_push = zmq_socket(context, ZMQ_PUSH);
  zmq_connect(sock_push, ((std::string("tcp://") + addr) + ":5575").c_str());

  sock_sub = zmq_socket(context, ZMQ_SUB);
  zmq_connect(sock_sub, ((std::string("tcp://") + addr) + ":5570").c_str());

  zmq_setsockopt(sock_sub, ZMQ_SUBSCRIBE, "", 0);

  if (vid_input_flag) {
    // VideoCapture from video file
    cap = VideoCapture(vid_path);
    out_json_path = string(vid_path, strrchr(vid_path, '.')) + "_output.json";
    out_vid_path = string(vid_path, strrchr(vid_path, '.')) + "_output.mp4";
  }
  else if (cam_input_flag) {
    // VideoCapture cam
    cap = VideoCapture(cam_num);
    cap.set(CAP_PROP_FPS, 20);
    cap.set(CAP_PROP_BUFFERSIZE, 3);
    fetch_flag = true;
    out_json_path = "./cam_output.json";
    out_vid_path = "./cam_output.mp4";
  }
  else {
    // error
    std::cerr << "Usage: " << argv[0] << " <-addr ADDR> <-cam CAM_NUM | -vid VIDEO_PATH> [-dont_show] [-out_json] [-out_vid] \n" << std::endl;
    return 0;
  }

  if (!cap.isOpened()) {
    cerr << "Error VideoCapture...\n";
    return -1;
  }

  double fps = cap.get(CAP_PROP_FPS);
  end_frame = cap.get(CAP_PROP_FRAME_COUNT);
  delay = cam_input_flag ? 1 : (1000.0 / fps);

  // read frame
  cap.read(mat_fetch);

  if (mat_fetch.empty()) {
    cerr << "Empty Mat Captured...\n";
    return 0;
  }

  mat_show_output = mat_fetch.clone();
  mat_show_input = mat_fetch.clone();

  // output init
  if (json_output_flag) {
    out_json_file = ofstream(out_json_path);
    if (!out_json_file.is_open()) {
      cerr << "output file : " << out_json_path << " open error \n";
      return 0;
    }
    out_json_file << "{\n \"det\": [\n";
  }

  if (vid_output_flag) {
    writer.open(out_vid_path, VideoWriter::fourcc('M', 'P', '4', 'V'), fps, Size(cap_width, cap_height), true);
    if (!writer.isOpened()) {
      cerr << "Error VideoWriter...\n";
      return -1;
    }
  }

  // thread init
  thread thread_fetch(fetch_thread);
  thread_fetch.detach();

  while (!fetch_flag);

  thread thread_show_input(input_show_thread);
  thread thread_show_output(output_show_thread);
  thread thread_recv(recv_thread);
  thread thread_capture(capture_thread);

  thread_show_input.detach();
  thread_show_output.detach();
  thread_recv.detach();
  thread_capture.detach();

  // wait until all worker threads have exited
  // (each thread increments final_exit_flag on entry and decrements on exit)
  while (final_exit_flag)
  {
    // for debug
    cout << "R : " << recv_queue.size() << " | C : " << cap_queue.size() << " | F : " << fetch_queue.size() << " | T : " << end_frame << " : " << show_frame << endl;
  }

  cap.release();

  if (json_output_flag) {
    out_json_file << "\n ]\n}";
    out_json_file.close();
  }

  if (vid_output_flag) {
    writer.release();
  }

  delete frame_pool;
  zmq_close(sock_sub);
  zmq_close(sock_push);
  zmq_ctx_destroy(context);

  return 0;
}

#define FETCH_THRESH 100
#define FETCH_WAIT_THRESH 30
#define FETCH_STATE 0
#define FETCH_WAIT 1
void fetch_thread(void) {
  volatile int fetch_state = FETCH_STATE;
  final_exit_flag += 1;
  while (!exit_flag) {

    switch (fetch_state) {
    case FETCH_STATE:
      if (cap.grab()) {

        cap.retrieve(mat_fetch);

        // push fetch queue
        fetch_queue.push_back(mat_fetch.clone());

        // if cam dont wait
        if (!cam_input_flag && (fetch_queue.size() > FETCH_THRESH)) {
          fetch_state = FETCH_WAIT;
        }
      }
      // if fetch end
      else {
        final_exit_flag -= 1;
        return;
      }
      break;
    case FETCH_WAIT:
      fetch_flag = true;
      if (fetch_queue.size() < FETCH_WAIT_THRESH) {
        fetch_state = FETCH_STATE;
      }
      break;
    }
  }
  final_exit_flag -= 1;
}

void capture_thread(void) {
  static vector<int> param = { IMWRITE_JPEG_QUALITY, 50 };
  static vector<uchar> encode_buf(JSON_BUF_LEN);

  volatile int frame_seq_num = 1;
  string frame_seq;

  // for json
  unsigned char json_buf[JSON_BUF_LEN];
  int send_json_len;

  Frame frame = frame_pool->alloc_frame();

  final_exit_flag += 1;
  while (!exit_flag) {
    if (fetch_queue.size() < 1)
      continue;

    // get input mat 
    mat_cap = fetch_queue.front().clone();
    fetch_queue.pop_front();

    if (mat_cap.empty()) {
      cerr << "Empty Mat Captured...\n";
      continue;
    }

    // resize
    resize(mat_cap, mat_cap, Size(cap_width, cap_height));

    // push to cap queue (for display input)
    cap_queue.push_back(mat_cap.clone());

    // mat to jpg
    imencode(".jpg", mat_cap, encode_buf, param);

    // jpg to json (seq + msg)
    frame_seq = to_string(frame_seq_num);
    frame.seq_len = frame_seq.size();
    memcpy(frame.seq_buf, frame_seq.c_str(), frame.seq_len);

    frame.msg_len = encode_buf.size();
    memcpy(frame.msg_buf, &encode_buf[0], frame.msg_len);

    send_json_len = frame_to_json(json_buf, frame);

    // send json to server
    zmq_send(sock_push, json_buf, send_json_len, 0);

    frame_seq_num++;
  }
  frame_pool->free_frame(frame);
  final_exit_flag -= 1;
}

void recv_thread(void) {
  int recv_json_len;
  int frame_seq_num = 1;
  Frame frame;
  unsigned char json_buf[JSON_BUF_LEN];

  final_exit_flag += 1;
  while (!exit_flag) {
    recv_json_len = zmq_recv(sock_sub, json_buf, JSON_BUF_LEN, ZMQ_NOBLOCK);

    if (recv_json_len > 0) {
      frame = frame_pool->alloc_frame();
      json_buf[recv_json_len] = '\0';
      json_to_frame(json_buf, frame);

      frame_seq_num = str_to_int((const char *)frame.seq_buf, frame.seq_len);

      // push to recv_queue (for display output)
      pair<long, Frame> p = make_pair(frame_seq_num, frame);
      recv_queue.push(p);
    }
  }
  final_exit_flag -= 1;
}

#define DONT_SHOW 0
#define SHOW_START 1
#define DONT_SHOW_THRESH 2  // for buffering
#define SHOW_START_THRESH 1 // for buffering

int volatile show_state = DONT_SHOW;
void input_show_thread(void) {

  if (!dont_show_flag) {
    namedWindow("INPUT");
    moveWindow("INPUT", 30, 130);
    cv::imshow("INPUT", mat_show_input);
  }

  final_exit_flag += 1;
  while (!exit_flag) {
    switch (show_state) {
    case DONT_SHOW:
      break;
    case SHOW_START:
      if (cap_queue.size() >= DONT_SHOW_THRESH) {
        mat_show_input = cap_queue.front().clone();
        cap_queue.pop_front();
      }
      break;
    }

    if (!dont_show_flag) {

      // draw text (INPUT) Left Upper corner  
      putText(mat_show_input, "INPUT", Point(10, 25),
        FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255), 2);

      cv::imshow("INPUT", mat_show_input);

      // wait key for exit
      if (waitKey(delay) >= 0)
        exit_flag = true;
    }
  }
  final_exit_flag -= 1;
}

void output_show_thread(void) {
  Frame frame;

  if (!dont_show_flag) {
    namedWindow("OUTPUT");
    moveWindow("OUTPUT", 670, 130);
    cv::imshow("OUTPUT", mat_show_output);
  }

  final_exit_flag += 1;
  while (!exit_flag) {

    switch (show_state) {
    case DONT_SHOW:
      if (recv_queue.size() >= SHOW_START_THRESH) {
        show_state = SHOW_START;
      }
      break;
    case SHOW_START:
      if (recv_queue.size() >= DONT_SHOW_THRESH || (end_frame - show_frame) == 1) {
        pair<long, Frame> p;
        // try pop success
        while (1) {
          if (recv_queue.try_pop(p)) {
            // if right sequence
            if (p.first == show_frame) {

              frame = ((Frame)p.second);
              vector<uchar> decode_buf((unsigned char*)(frame.msg_buf), (unsigned char*)(frame.msg_buf) + frame.msg_len);

              // jpg to mat
              mat_show_output = imdecode(decode_buf, IMREAD_COLOR);

              // resize
              resize(mat_show_output, mat_recv, Size(cap_width, cap_height));

              // write out_json
              if (json_output_flag) {
                if (show_frame != 1)
                  out_json_file << ",\n";
                out_json_file.write((const char*)frame.det_buf, frame.det_len);
              }

              // write out_vid
              if (vid_output_flag)
                writer.write(mat_show_output);

              // free frame
              frame_pool->free_frame(frame);
              show_frame++;
            }
            // wrong sequence
            else {
              recv_queue.push(p);
            }
            break;
          }
        }
      }
      else {
        show_state = DONT_SHOW;
      }
      break;
    }

    if (show_frame == end_frame)
      exit_flag = true;

    if (!dont_show_flag) {
      // draw text (OUTPUT) Left Upper corner  
      putText(mat_show_output, "OUTPUT", Point(10, 25),
        FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0), 2);

      cv::imshow("OUTPUT", mat_show_output);

      // wait key for exit
      if (waitKey(delay) >= 0)
        exit_flag = true;
    }
  }
  final_exit_flag -= 1;
}
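Annotation: `output_show_thread` above drains `recv_queue` (a min-priority queue keyed on the frame sequence number) and only displays a frame when its sequence number matches `show_frame`, pushing out-of-order frames back. The reordering logic can be sketched with Python's `heapq` (illustrative only; the client itself uses TBB/PPL concurrent priority queues):

```python
import heapq

# Restore display order for frames that return from the workers out of
# order: buffer them in a min-heap keyed on sequence number and emit a
# frame only when its number matches the next expected one.
def reorder(frames):
    heap, next_seq, out = [], 1, []
    for seq, payload in frames:
        heapq.heappush(heap, (seq, payload))
        while heap and heap[0][0] == next_seq:
            out.append(heapq.heappop(heap)[1])
            next_seq += 1
    return out

# workers finished 2 before 1 and 4 before 3; output is still in order
assert reorder([(2, "b"), (1, "a"), (4, "d"), (3, "c")]) == ["a", "b", "c", "d"]
```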


================================================
FILE: client/darknet_client/src/mem_pool.cpp
================================================
#include <stdio.h>
#include <stdlib.h>
#include "mem_pool.hpp"
/*==========================================================
CMemPool:
    Constructor of this class. It allocates a memory block from the system
    and creates a static doubly linked list to manage all memory units.

Parameters:
    [in]ulUnitNum
    The number of units that make up the memory block.

    [in]ulUnitSize
    The size of each unit in bytes.
//=========================================================
*/
CMemPool::CMemPool(unsigned long ulUnitNum, unsigned long ulUnitSize) :
  m_pMemBlock(NULL), m_pAllocatedMemBlock(NULL), m_pFreeMemBlock(NULL),
  m_ulBlockSize(ulUnitNum * (ulUnitSize + sizeof(struct _Unit))),
  m_ulUnitSize(ulUnitSize)
{
  m_pMemBlock = malloc(m_ulBlockSize);     //Allocate a memory block.

  if (NULL != m_pMemBlock)
  {
    for (unsigned long i = 0; i < ulUnitNum; i++)  //Link all mem unit . Create linked list.
    {
      struct _Unit *pCurUnit = (struct _Unit *)((char *)m_pMemBlock + i * (ulUnitSize + sizeof(struct _Unit)));

      pCurUnit->pPrev = NULL;
      pCurUnit->pNext = m_pFreeMemBlock;    //Insert the new unit at head.

      if (NULL != m_pFreeMemBlock)
      {
        m_pFreeMemBlock->pPrev = pCurUnit;
      }
      m_pFreeMemBlock = pCurUnit;
    }
  }
}

/*===============================================================
~CMemPool():
    Destructor of this class. Its task is to free the memory block.
//===============================================================
*/
CMemPool::~CMemPool()
{
  free(m_pMemBlock);
}

/*================================================================
Alloc:
    Allocates a memory unit. If the memory pool cannot provide a suitable
    unit (the request is too large, the pool is disabled, or no free units
    remain), it falls back to the system allocator.

Parameters:
    [in]ulSize
    Requested memory size in bytes.

    [in]bUseMemPool
    Whether to use the memory pool.

Return Values:
    A pointer to a memory unit.
//=================================================================
*/
void* CMemPool::Alloc(unsigned long ulSize, bool bUseMemPool)
{
  if (ulSize > m_ulUnitSize || false == bUseMemPool ||
    NULL == m_pMemBlock || NULL == m_pFreeMemBlock)
  {
    return malloc(ulSize);
  }

  //Now FreeList isn`t empty
  struct _Unit *pCurUnit = m_pFreeMemBlock;
  m_pFreeMemBlock = pCurUnit->pNext;            //Get a unit from free linkedlist.
  if (NULL != m_pFreeMemBlock)
  {
    m_pFreeMemBlock->pPrev = NULL;
  }

  pCurUnit->pNext = m_pAllocatedMemBlock;

  if (NULL != m_pAllocatedMemBlock)
  {
    m_pAllocatedMemBlock->pPrev = pCurUnit;
  }
  m_pAllocatedMemBlock = pCurUnit;

  return (void *)((char *)pCurUnit + sizeof(struct _Unit));
}

/*================================================================
Free:
    Frees a memory unit. If the pointer points into the pool's memory
    block, the unit is returned to the free list. Otherwise the system
    function "free" is called.

Parameters:
    [in]p
    Pointer to the memory unit to be freed.

Return Values:
    none
//================================================================
*/
void CMemPool::Free(void* p)
{
  if (m_pMemBlock < p && p < (void *)((char *)m_pMemBlock + m_ulBlockSize))
  {
    struct _Unit *pCurUnit = (struct _Unit *)((char *)p - sizeof(struct _Unit));

    // Unlink from the allocated list; the unit is not necessarily the head.
    if (NULL != pCurUnit->pPrev)
    {
      pCurUnit->pPrev->pNext = pCurUnit->pNext;
    }
    else
    {
      m_pAllocatedMemBlock = pCurUnit->pNext;
    }
    if (NULL != pCurUnit->pNext)
    {
      pCurUnit->pNext->pPrev = pCurUnit->pPrev;
    }

    // Insert at the head of the free list.
    pCurUnit->pPrev = NULL;
    pCurUnit->pNext = m_pFreeMemBlock;
    if (NULL != m_pFreeMemBlock)
    {
      m_pFreeMemBlock->pPrev = pCurUnit;
    }

    m_pFreeMemBlock = pCurUnit;
  }
  else
  {
    free(p);
  }
}
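Annotation: the fixed-size free-list pool above can be sketched in a few lines of Python (illustrative only; names `MemPool`, `alloc`, `free_` are hypothetical): pre-allocate a fixed number of equal-size slots, recycle them through a free list, and fall back to a regular allocation when the pool is exhausted or the request is too large, mirroring `Alloc`'s malloc fallback and `Free`'s address-range check.

```python
# Minimal sketch of the CMemPool idea: a pool of fixed-size slots with a
# free list and a fallback to ordinary allocation.
class MemPool:
    def __init__(self, unit_num: int, unit_size: int):
        self.unit_size = unit_size
        self.free_list = [bytearray(unit_size) for _ in range(unit_num)]
        self.pooled = set()            # ids of buffers currently lent out

    def alloc(self, size: int) -> bytearray:
        if size > self.unit_size or not self.free_list:
            return bytearray(size)     # fallback, like malloc() in Alloc()
        buf = self.free_list.pop()
        self.pooled.add(id(buf))
        return buf

    def free_(self, buf: bytearray) -> None:
        if id(buf) in self.pooled:     # like the address-range check in Free()
            self.pooled.discard(id(buf))
            self.free_list.append(buf) # back onto the free list

pool = MemPool(unit_num=2, unit_size=64)
a = pool.alloc(64)
pool.free_(a)
assert pool.alloc(64) is a             # the slot is recycled, not re-allocated
```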

================================================
FILE: client/darknet_client/src/mem_pool.hpp
================================================
#ifndef __MEMPOOL_H__
#define __MEMPOOL_H__
// https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how-to-implement-it
class CMemPool
{
private:
  //The purpose of the structure`s definition is that we can operate linkedlist conveniently
  struct _Unit                     //The type of the node of linkedlist.
  {
    struct _Unit *pPrev, *pNext;
  };

  void* m_pMemBlock;                //The address of memory pool.

  //Manage all unit with two linkedlist.
  struct _Unit*    m_pAllocatedMemBlock; //Head pointer to Allocated linkedlist.
  struct _Unit*    m_pFreeMemBlock;      //Head pointer to Free linkedlist.

  unsigned long    m_ulUnitSize; //Size of one memory unit; the pool contains many such units.
  unsigned long    m_ulBlockSize;//Total size of the memory pool, which is made up of memory units.

public:
  CMemPool(unsigned long ulUnitNum = 50, unsigned long ulUnitSize = 1024);
  ~CMemPool();

  void* Alloc(unsigned long ulSize, bool bUseMemPool = true); //Allocate memory unit
  void Free(void* p);                                   //Free memory unit
};

#endif //__MEMPOOL_H__


================================================
FILE: client/darknet_client/src/share_queue.hpp
================================================
#ifndef __SHARE_QUEUE_HPP
#define __SHARE_QUEUE_HPP

// https://stackoverflow.com/questions/36762248/why-is-stdqueue-not-thread-safe
#include <queue>
#include <mutex>
#include <condition_variable>

template <typename T>
class SharedQueue
{
public:
  SharedQueue();
  ~SharedQueue();

  T& front();
  void pop_front();

  void push_back(const T& item);
  void push_back(T&& item);

  int size();
  bool empty();

private:
  std::deque<T> queue_;
  std::mutex mutex_;
  std::condition_variable cond_;
};

template <typename T>
SharedQueue<T>::SharedQueue() {}

template <typename T>
SharedQueue<T>::~SharedQueue() {}

template <typename T>
T& SharedQueue<T>::front()
{
  std::unique_lock<std::mutex> mlock(mutex_);
  while (queue_.empty())
  {
    cond_.wait(mlock);
  }
  return queue_.front();
}

template <typename T>
void SharedQueue<T>::pop_front()
{
  std::unique_lock<std::mutex> mlock(mutex_);
  while (queue_.empty())
  {
    cond_.wait(mlock);
  }
  queue_.pop_front();
}

template <typename T>
void SharedQueue<T>::push_back(const T& item)
{
  std::unique_lock<std::mutex> mlock(mutex_);
  queue_.push_back(item);
  mlock.unlock();     // unlock before notification to minimize mutex contention
  cond_.notify_one(); // notify one waiting thread

}

template <typename T>
void SharedQueue<T>::push_back(T&& item)
{
  std::unique_lock<std::mutex> mlock(mutex_);
  queue_.push_back(std::move(item));
  mlock.unlock();     // unlock before notification to minimize mutex contention
  cond_.notify_one(); // notify one waiting thread

}

template <typename T>
int SharedQueue<T>::size()
{
  std::unique_lock<std::mutex> mlock(mutex_);
  int size = queue_.size();
  mlock.unlock();
  return size;
}

template <typename T>
bool SharedQueue<T>::empty()
{
  std::unique_lock<std::mutex> mlock(mutex_);
  return queue_.empty();
}
#endif
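Annotation: `SharedQueue` above wraps `std::deque` with a mutex and condition variable so `front()`/`pop_front()` block until an item arrives. The same producer/consumer pattern in Python (illustrative sketch; `threading.Condition` plays the role of the mutex plus condition variable pair):

```python
import threading
from collections import deque

# Blocking thread-safe queue mirroring the C++ SharedQueue above.
class SharedQueue:
    def __init__(self):
        self._q = deque()
        self._cond = threading.Condition()

    def push_back(self, item):
        with self._cond:
            self._q.append(item)
            self._cond.notify()        # wake one waiting consumer

    def pop_front(self):
        with self._cond:
            while not self._q:         # loop guards against spurious wakeups
                self._cond.wait()
            return self._q.popleft()

q = SharedQueue()
t = threading.Thread(target=lambda: q.push_back(42))
t.start()
assert q.pop_front() == 42             # blocks until the producer pushes
t.join()
```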

================================================
FILE: client/darknet_client/src/util.cpp
================================================
#include "util.hpp"

// utility
int str_to_int(const char* str, int len)
{
  int i;
  int ret = 0;
  for (i = 0; i < len; ++i)
  {
    ret = ret * 10 + (str[i] - '0');
  }
  return ret;
}

================================================
FILE: client/darknet_client/src/util.hpp
================================================
#ifndef __UTIL_HPP
#define __UTIL_HPP

int str_to_int(const char* str, int len);

#endif

================================================
FILE: client/darknet_client.sln
================================================

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.28307.852
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "darknet_client", "darknet_client\darknet_client.vcxproj", "{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}"
EndProject
Global
	GlobalSection(SolutionConfigurationPlatforms) = preSolution
		Debug|x64 = Debug|x64
		Debug|x86 = Debug|x86
		Release|x64 = Release|x64
		Release|x86 = Release|x86
	EndGlobalSection
	GlobalSection(ProjectConfigurationPlatforms) = postSolution
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x64.ActiveCfg = Debug|x64
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x64.Build.0 = Debug|x64
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x86.ActiveCfg = Debug|Win32
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Debug|x86.Build.0 = Debug|Win32
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x64.ActiveCfg = Release|x64
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x64.Build.0 = Release|x64
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x86.ActiveCfg = Release|Win32
		{D8EF8C1B-C4C1-4C68-A033-31AD741BE59A}.Release|x86.Build.0 = Release|Win32
	EndGlobalSection
	GlobalSection(SolutionProperties) = preSolution
		HideSolutionNode = FALSE
	EndGlobalSection
	GlobalSection(ExtensibilityGlobals) = postSolution
		SolutionGuid = {FA14131C-5C1C-41BF-AEB9-201B8E8E0B1E}
	EndGlobalSection
EndGlobal


================================================
FILE: server/Makefile
================================================
DEBUG = 1

CPP = g++
COMMON = -DOPENCV
CXXFLAGS = -g -Wall -O2 -std=c++11 -DOPENCV
LDFLAGS = -lstdc++ -lpthread -lzmq -lrt -ltbb -ldarknet -lboost_serialization

CXXFLAGS += `pkg-config --cflags json-c`
CXXFLAGS += `pkg-config --cflags opencv`

LDFLAGS += `pkg-config --libs json-c`
LDFLAGS += `pkg-config --libs opencv`

ifeq ($(DEBUG), 1)
COMMON += -DDEBUG
endif

VPATH = ./src/
OBJDIR = ./obj/
DEPS = $(wildcard src/*.h*)

EXEC1 = ventilator
EXEC1_OBJ = ventilator.o frame.o mem_pool.o base64.o
EXEC1_OBJS = $(addprefix $(OBJDIR), $(EXEC1_OBJ))

EXEC2 = worker
EXEC2_OBJ = worker.o people.o pose_detector.o frame.o mem_pool.o base64.o args.o
EXEC2_OBJS = $(addprefix $(OBJDIR), $(EXEC2_OBJ))

EXEC3 = sink
EXEC3_OBJ = sink.o people.o Tracker.o Hungarian.o KalmanTracker.o frame.o mem_pool.o base64.o
EXEC3_OBJS = $(addprefix $(OBJDIR), $(EXEC3_OBJ))

OBJS = $(EXEC1_OBJS) $(EXEC2_OBJS) $(EXEC3_OBJS)
EXECS = $(EXEC1) $(EXEC2) $(EXEC3)
INPROCS = processed unprocessed action

all: $(OBJDIR) $(EXECS)

$(EXEC1): $(EXEC1_OBJS)
	$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)

$(EXEC2): $(EXEC2_OBJS)
	$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)

$(EXEC3): $(EXEC3_OBJS)
	$(CPP) $(COMMON) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)

$(OBJDIR)%.o: %.cpp $(DEPS)
	$(CPP) $(COMMON) $(CXXFLAGS) -c $< -o $@ 

$(OBJDIR):
	mkdir -p $(OBJDIR) cfg weights names train

clean:
	rm -rf $(OBJS) $(EXECS) $(INPROCS)


================================================
FILE: server/action.py
================================================
import tensorflow as tf
import numpy as np
import zmq
import io
import time
from tensorflow import keras

from tensorflow.keras.backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)
sess.run(tf.global_variables_initializer())

'''
## TF 2.0
from tensorflow.compat.v1.keras.backend import set_session
tf.compat.v1.disable_eager_execution()
config = tf.compat.v1.ConfigProto()  
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
set_session(sess)
sess.run(tf.compat.v1.global_variables_initializer())
'''

model = keras.models.load_model("weights/action.h5")

n_input = 24  # num input parameters per timestep
n_steps = 32
n_hidden = 34 # Hidden layer num of features
n_classes = 4 
batch_size = 1024

def load_X(msg):
  buf = io.StringIO(msg)
  X_ = np.array(
      [elem for elem in [
      row.split(',') for row in buf
      ]], 
      dtype=np.float32
      )
  blocks = int(len(X_) / n_steps)
  X_ = np.array(np.split(X_,blocks))
  return X_ 


# load
input_ = np.zeros((batch_size, n_steps, n_input), dtype=np.float32)
print("model loaded ...")

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("ipc://action")

while True:
  msg = socket.recv()
  msg = msg.decode("utf-8")
  recv_ = load_X(msg)

  for i in range(len(recv_)):
    input_[i] = recv_[i]
  startTime = time.time()
  pred = model.predict_classes(input_, batch_size = batch_size)
  
  endTime = time.time() - startTime
  print("time : ", endTime)
  pred_str = ""

  for i in range(len(recv_)):
    pred_str += str(pred[i])
  print("result : ", pred_str)
  socket.send_string(pred_str)
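Annotation: `load_X` above regroups every `n_steps` (32) consecutive rows of `n_input` (24) pose features into one sequence window for the LSTM classifier, yielding shape `(blocks, 32, 24)`. The same windowing with plain lists (numpy-free sketch for illustration):

```python
# Regroup flat feature rows into fixed-length sequence windows, as load_X
# does with np.split: 64 rows of 24 features -> 2 windows of (32, 24).
n_steps, n_input = 32, 24
rows = [[float(i)] * n_input for i in range(64)]   # 64 flat feature rows
windows = [rows[i:i + n_steps] for i in range(0, len(rows), n_steps)]
assert len(windows) == 2
assert len(windows[0]) == n_steps and len(windows[0][0]) == n_input
```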


================================================
FILE: server/action_train.py
================================================
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

# Set parameter
n_input = 24  # num input parameters per timestep
n_steps = 32
n_hidden = 34 # Hidden layer num of features
n_classes = 4 
batch_size = 1024
lambda_loss_amount = 0.0015
learning_rate = 0.0025
decay_rate = 0.02
training_epochs = 300

###
def load_X(X_path):
    file = open(X_path, 'r')
    X_ = np.array(
        [elem for elem in [
            row.split(',') for row in file
        ]], 
        dtype=np.float32
    )
    file.close()
    blocks = int(len(X_) / n_steps)
    X_ = np.array(np.split(X_,blocks))
    return X_ 

def load_y(y_path):
    file = open(y_path, 'r')
    y_ = np.array(
        [elem for elem in [
            row.replace('  ', ' ').strip().split(' ') for row in file
        ]], 
        dtype=np.int32
    )
    file.close()
    # for 0-based indexing 
    return y_ - 1


DATASET_PATH = "train/"

X_train_path = DATASET_PATH + "pose36.txt"
X_test_path = DATASET_PATH + "pose36_test.txt"

y_train_path = DATASET_PATH + "pose36_c.txt"
y_test_path = DATASET_PATH + "pose36_test_c.txt"


X_train = load_X(X_train_path)
X_test = load_X(X_test_path)
#print X_test

y_train = load_y(y_train_path)
y_test = load_y(y_test_path)
###

model = tf.keras.Sequential([
   # relu activation
   layers.Dense(n_hidden, activation='relu', 
       kernel_initializer='random_normal', 
       bias_initializer='random_normal',
       batch_input_shape=(batch_size, n_steps, n_input)
   ),
   
   # cuDNN
   layers.CuDNNLSTM(n_hidden, return_sequences=True,  unit_forget_bias=1.0),
   layers.CuDNNLSTM(n_hidden,  unit_forget_bias=1.0),
   
   # layers.LSTM(n_hidden, return_sequences=True,  unit_forget_bias=1.0),
   # layers.LSTM(n_hidden,  unit_forget_bias=1.0),

   layers.Dense(n_classes, kernel_initializer='random_normal', 
       bias_initializer='random_normal',
       kernel_regularizer=tf.keras.regularizers.l2(lambda_loss_amount),
       bias_regularizer=tf.keras.regularizers.l2(lambda_loss_amount),
       activation='softmax'
   )
])

model.compile(
   optimizer=tf.keras.optimizers.Adam(lr=learning_rate, decay=decay_rate),
   metrics=['accuracy'],
   loss='categorical_crossentropy'
)

y_train_one_hot = keras.utils.to_categorical(y_train, 4)
y_test_one_hot = keras.utils.to_categorical(y_test, 4)

train_size = X_train.shape[0] - X_train.shape[0] % batch_size
test_size = X_test.shape[0] - X_test.shape[0] % batch_size

history = model.fit(
   X_train[:train_size,:,:], 
   y_train_one_hot[:train_size,:], 
   epochs=training_epochs,
   batch_size=batch_size,
   validation_data=(X_test[:test_size,:,:], y_test_one_hot[:test_size,:])
)

# save trained weights; reload later with tf.keras.models.load_model
model.save('weights/action.h5')
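The data shaping above has two steps that are easy to get wrong: `load_X` splits the flat list of pose rows into `(blocks, n_steps, n_input)` windows, and `model.fit` truncates the sample count to a multiple of `batch_size` (required by the fixed `batch_input_shape`). A minimal plain-Python sketch, using made-up sizes rather than the real `n_steps`/`batch_size` defined earlier in action_train.py:

```python
n_steps = 4      # frames per sequence window (illustrative value only)
batch_size = 3   # fixed batch size (illustrative value only)

rows = [[float(i)] * 2 for i in range(20)]   # 20 fake pose rows, n_input = 2

# load_X: split the flat row list into (blocks, n_steps, n_input) windows
blocks = len(rows) // n_steps
X = [rows[b * n_steps:(b + 1) * n_steps] for b in range(blocks)]

# model.fit: drop the tail so the sample count is a multiple of batch_size
train_size = len(X) - len(X) % batch_size
X_trunc = X[:train_size]
```

With 20 rows this yields 5 windows of 4 frames, of which 3 survive the batch-size truncation.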


================================================
FILE: server/cfg/openpose.cfg
================================================
[net]
width=200
height=200
channels=3

[convolutional]
batch_normalize=0
filters=64
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=64
size=3
stride=1
pad=1
activation=relu

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=0
filters=256
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=256
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=256
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=256
size=3
stride=1
pad=1
activation=relu

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=0
filters=512
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=512
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=256
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

#######

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=512
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=38
size=1
stride=1
pad=0
activation=linear

[route]
layers=-6

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=0
filters=512
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=19
size=1
stride=1
pad=0
activation=linear

[route]
layers=-7,-1,-12

###concat_stage2###

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=38
size=1
stride=1
pad=0
activation=linear

[route]
layers=-8

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=19
size=1
stride=1
pad=0
activation=linear

[route]
layers=-9,-1,-28

###concat_stage3###

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=38
size=1
stride=1
pad=0
activation=linear

[route]
layers=-8

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=19
size=1
stride=1
pad=0
activation=linear

[route]
layers=-9,-1,-44

###concat_stage4###

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=38
size=1
stride=1
pad=0
activation=linear

[route]
layers=-8

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=19
size=1
stride=1
pad=0
activation=linear

[route]
layers=-9,-1,-60

###concat_stage5###

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=38
size=1
stride=1
pad=0
activation=linear

[route]
layers=-8

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=19
size=1
stride=1
pad=0
activation=linear

[route]
layers=-9,-1,-76

###concat_stage6###

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=38
size=1
stride=1
pad=0
activation=linear

[route]
layers=-8

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=7
stride=1
pad=3
activation=relu

[convolutional]
batch_normalize=0
filters=128
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=0
filters=19
size=1
stride=1
pad=0
activation=linear

[route]
layers=-1,-9
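The `[route]` sections in this cfg use darknet's relative indexing: a negative entry counts back from the route layer's own position, so `layers=-1,-9` concatenates the outputs of the immediately preceding layer and the layer nine back, while a non-negative entry is an absolute layer index. A small sketch of that resolution (the function name is ours, not darknet's):

```python
def resolve_route(route_index, offsets):
    """Map darknet [route] offsets to absolute layer indices.

    Negative offsets are relative to the route layer's own index;
    non-negative entries are already absolute indices.
    """
    return [route_index + off if off < 0 else off for off in offsets]
```

For example, `layers=-6` in a route at layer 10 pulls from layer 4, which is how each stage's PAF and heatmap branches reuse the shared feature map.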


================================================
FILE: server/src/DetectorInterface.hpp
================================================
#ifndef __DET_INTERFACE
#define __DET_INTERFACE
#include <string>
#include <opencv2/opencv.hpp>

class DetectorInterface {
public:
  virtual void detect(cv::Mat, float thresh) = 0;
  virtual void draw(cv::Mat) = 0;
  virtual std::string det_to_json(int frame_id) = 0;
};
  
#endif


================================================
FILE: server/src/Hungarian.cpp
================================================
///////////////////////////////////////////////////////////////////////////////
// Hungarian.cpp: Implementation file for Class HungarianAlgorithm.
// 
// This is a C++ wrapper with slight modification of a hungarian algorithm implementation by Markus Buehren.
// The original implementation is a few mex-functions for use in MATLAB, found here:
// http://www.mathworks.com/matlabcentral/fileexchange/6543-functions-for-the-rectangular-assignment-problem
// 
// Both this code and the original code are published under the BSD license.
// by Cong Ma, 2016
// 

#include "Hungarian.h"


HungarianAlgorithm::HungarianAlgorithm(){}
HungarianAlgorithm::~HungarianAlgorithm(){}


//********************************************************//
// A single function wrapper for solving assignment problem.
//********************************************************//
double HungarianAlgorithm::Solve(vector<vector<double> >& DistMatrix, vector<int>& Assignment)
{
	unsigned int nRows = DistMatrix.size();
	unsigned int nCols = DistMatrix[0].size();

	double *distMatrixIn = new double[nRows * nCols];
	int *assignment = new int[nRows];
	double cost = 0.0;

	// Fill in distMatrixIn. Mind the index is "i + nRows * j".
	// The cost matrix of size MxN is stored as a double-precision array of M*N elements,
	// in column-major order, the way MATLAB stores matrices internally
	// (i.e. the matrix [1 2; 3 4] is stored as the vector [1 3 2 4], NOT [1 2 3 4]).
	for (unsigned int i = 0; i < nRows; i++)
		for (unsigned int j = 0; j < nCols; j++)
			distMatrixIn[i + nRows * j] = DistMatrix[i][j];
	
	// call solving function
	assignmentoptimal(assignment, &cost, distMatrixIn, nRows, nCols);

	Assignment.clear();
	for (unsigned int r = 0; r < nRows; r++)
		Assignment.push_back(assignment[r]);

	delete[] distMatrixIn;
	delete[] assignment;
	return cost;
}


//********************************************************//
// Solve optimal solution for assignment problem using Munkres algorithm, also known as Hungarian Algorithm.
//********************************************************//
void HungarianAlgorithm::assignmentoptimal(int *assignment, double *cost, double *distMatrixIn, int nOfRows, int nOfColumns)
{
	double *distMatrix, *distMatrixTemp, *distMatrixEnd, *columnEnd, value, minValue;
	bool *coveredColumns, *coveredRows, *starMatrix, *newStarMatrix, *primeMatrix;
	int nOfElements, minDim, row, col;

	/* initialization */
	*cost = 0;
	for (row = 0; row<nOfRows; row++)
		assignment[row] = -1;

	/* generate working copy of distance Matrix */
	/* check if all matrix elements are positive */
	nOfElements = nOfRows * nOfColumns;
	distMatrix = (double *)malloc(nOfElements * sizeof(double));
	distMatrixEnd = distMatrix + nOfElements;

	for (row = 0; row<nOfElements; row++)
	{
		value = distMatrixIn[row];
		if (value < 0)
			cerr << "All matrix elements have to be non-negative." << endl;
		distMatrix[row] = value;
	}


	/* memory allocation */
	coveredColumns = (bool *)calloc(nOfColumns, sizeof(bool));
	coveredRows = (bool *)calloc(nOfRows, sizeof(bool));
	starMatrix = (bool *)calloc(nOfElements, sizeof(bool));
	primeMatrix = (bool *)calloc(nOfElements, sizeof(bool));
	newStarMatrix = (bool *)calloc(nOfElements, sizeof(bool)); /* used in step4 */

	/* preliminary steps */
	if (nOfRows <= nOfColumns)
	{
		minDim = nOfRows;

		for (row = 0; row<nOfRows; row++)
		{
			/* find the smallest element in the row */
			distMatrixTemp = distMatrix + row;
			minValue = *distMatrixTemp;
			distMatrixTemp += nOfRows;
			while (distMatrixTemp < distMatrixEnd)
			{
				value = *distMatrixTemp;
				if (value < minValue)
					minValue = value;
				distMatrixTemp += nOfRows;
			}

			/* subtract the smallest element from each element of the row */
			distMatrixTemp = distMatrix + row;
			while (distMatrixTemp < distMatrixEnd)
			{
				*distMatrixTemp -= minValue;
				distMatrixTemp += nOfRows;
			}
		}

		/* Steps 1 and 2a */
		for (row = 0; row<nOfRows; row++)
			for (col = 0; col<nOfColumns; col++)
				if (fabs(distMatrix[row + nOfRows*col]) < DBL_EPSILON)
					if (!coveredColumns[col])
					{
						starMatrix[row + nOfRows*col] = true;
						coveredColumns[col] = true;
						break;
					}
	}
	else /* if(nOfRows > nOfColumns) */
	{
		minDim = nOfColumns;

		for (col = 0; col<nOfColumns; col++)
		{
			/* find the smallest element in the column */
			distMatrixTemp = distMatrix + nOfRows*col;
			columnEnd = distMatrixTemp + nOfRows;

			minValue = *distMatrixTemp++;
			while (distMatrixTemp < columnEnd)
			{
				value = *distMatrixTemp++;
				if (value < minValue)
					minValue = value;
			}

			/* subtract the smallest element from each element of the column */
			distMatrixTemp = distMatrix + nOfRows*col;
			while (distMatrixTemp < columnEnd)
				*distMatrixTemp++ -= minValue;
		}

		/* Steps 1 and 2a */
		for (col = 0; col<nOfColumns; col++)
			for (row = 0; row<nOfRows; row++)
				if (fabs(distMatrix[row + nOfRows*col]) < DBL_EPSILON)
					if (!coveredRows[row])
					{
						starMatrix[row + nOfRows*col] = true;
						coveredColumns[col] = true;
						coveredRows[row] = true;
						break;
					}
		for (row = 0; row<nOfRows; row++)
			coveredRows[row] = false;

	}

	/* move to step 2b */
	step2b(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);

	/* compute cost and remove invalid assignments */
	computeassignmentcost(assignment, cost, distMatrixIn, nOfRows);

	/* free allocated memory */
	free(distMatrix);
	free(coveredColumns);
	free(coveredRows);
	free(starMatrix);
	free(primeMatrix);
	free(newStarMatrix);

	return;
}

/********************************************************/
void HungarianAlgorithm::buildassignmentvector(int *assignment, bool *starMatrix, int nOfRows, int nOfColumns)
{
	int row, col;

	for (row = 0; row<nOfRows; row++)
		for (col = 0; col<nOfColumns; col++)
			if (starMatrix[row + nOfRows*col])
			{
#ifdef ONE_INDEXING
				assignment[row] = col + 1; /* MATLAB-Indexing */
#else
				assignment[row] = col;
#endif
				break;
			}
}

/********************************************************/
void HungarianAlgorithm::computeassignmentcost(int *assignment, double *cost, double *distMatrix, int nOfRows)
{
	int row, col;

	for (row = 0; row<nOfRows; row++)
	{
		col = assignment[row];
		if (col >= 0)
			*cost += distMatrix[row + nOfRows*col];
	}
}

/********************************************************/
void HungarianAlgorithm::step2a(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)
{
	bool *starMatrixTemp, *columnEnd;
	int col;

	/* cover every column containing a starred zero */
	for (col = 0; col<nOfColumns; col++)
	{
		starMatrixTemp = starMatrix + nOfRows*col;
		columnEnd = starMatrixTemp + nOfRows;
		while (starMatrixTemp < columnEnd){
			if (*starMatrixTemp++)
			{
				coveredColumns[col] = true;
				break;
			}
		}
	}

	/* move to step 3 */
	step2b(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);
}

/********************************************************/
void HungarianAlgorithm::step2b(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)
{
	int col, nOfCoveredColumns;

	/* count covered columns */
	nOfCoveredColumns = 0;
	for (col = 0; col<nOfColumns; col++)
		if (coveredColumns[col])
			nOfCoveredColumns++;

	if (nOfCoveredColumns == minDim)
	{
		/* algorithm finished */
		buildassignmentvector(assignment, starMatrix, nOfRows, nOfColumns);
	}
	else
	{
		/* move to step 3 */
		step3(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);
	}

}

/********************************************************/
void HungarianAlgorithm::step3(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)
{
	bool zerosFound;
	int row, col, starCol;

	zerosFound = true;
	while (zerosFound)
	{
		zerosFound = false;
		for (col = 0; col<nOfColumns; col++)
			if (!coveredColumns[col])
				for (row = 0; row<nOfRows; row++)
					if ((!coveredRows[row]) && (fabs(distMatrix[row + nOfRows*col]) < DBL_EPSILON))
					{
						/* prime zero */
						primeMatrix[row + nOfRows*col] = true;

						/* find starred zero in current row */
						for (starCol = 0; starCol<nOfColumns; starCol++)
							if (starMatrix[row + nOfRows*starCol])
								break;

						if (starCol == nOfColumns) /* no starred zero found */
						{
							/* move to step 4 */
							step4(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim, row, col);
							return;
						}
						else
						{
							coveredRows[row] = true;
							coveredColumns[starCol] = false;
							zerosFound = true;
							break;
						}
					}
	}

	/* move to step 5 */
	step5(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);
}

/********************************************************/
void HungarianAlgorithm::step4(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim, int row, int col)
{
	int n, starRow, starCol, primeRow, primeCol;
	int nOfElements = nOfRows*nOfColumns;

	/* generate temporary copy of starMatrix */
	for (n = 0; n<nOfElements; n++)
		newStarMatrix[n] = starMatrix[n];

	/* star current zero */
	newStarMatrix[row + nOfRows*col] = true;

	/* find starred zero in current column */
	starCol = col;
	for (starRow = 0; starRow<nOfRows; starRow++)
		if (starMatrix[starRow + nOfRows*starCol])
			break;

	while (starRow<nOfRows)
	{
		/* unstar the starred zero */
		newStarMatrix[starRow + nOfRows*starCol] = false;

		/* find primed zero in current row */
		primeRow = starRow;
		for (primeCol = 0; primeCol<nOfColumns; primeCol++)
			if (primeMatrix[primeRow + nOfRows*primeCol])
				break;

		/* star the primed zero */
		newStarMatrix[primeRow + nOfRows*primeCol] = true;

		/* find starred zero in current column */
		starCol = primeCol;
		for (starRow = 0; starRow<nOfRows; starRow++)
			if (starMatrix[starRow + nOfRows*starCol])
				break;
	}

	/* use temporary copy as new starMatrix */
	/* delete all primes, uncover all rows */
	for (n = 0; n<nOfElements; n++)
	{
		primeMatrix[n] = false;
		starMatrix[n] = newStarMatrix[n];
	}
	for (n = 0; n<nOfRows; n++)
		coveredRows[n] = false;

	/* move to step 2a */
	step2a(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);
}

/********************************************************/
void HungarianAlgorithm::step5(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim)
{
	double h, value;
	int row, col;

	/* find smallest uncovered element h */
	h = DBL_MAX;
	for (row = 0; row<nOfRows; row++)
		if (!coveredRows[row])
			for (col = 0; col<nOfColumns; col++)
				if (!coveredColumns[col])
				{
					value = distMatrix[row + nOfRows*col];
					if (value < h)
						h = value;
				}

	/* add h to each covered row */
	for (row = 0; row<nOfRows; row++)
		if (coveredRows[row])
			for (col = 0; col<nOfColumns; col++)
				distMatrix[row + nOfRows*col] += h;

	/* subtract h from each uncovered column */
	for (col = 0; col<nOfColumns; col++)
		if (!coveredColumns[col])
			for (row = 0; row<nOfRows; row++)
				distMatrix[row + nOfRows*col] -= h;

	/* move to step 3 */
	step3(assignment, distMatrix, starMatrix, newStarMatrix, primeMatrix, coveredColumns, coveredRows, nOfRows, nOfColumns, minDim);
}


================================================
FILE: server/src/Hungarian.h
================================================
///////////////////////////////////////////////////////////////////////////////
// Hungarian.h: Header file for Class HungarianAlgorithm.
// 
// This is a C++ wrapper with slight modification of a hungarian algorithm implementation by Markus Buehren.
// The original implementation is a few mex-functions for use in MATLAB, found here:
// http://www.mathworks.com/matlabcentral/fileexchange/6543-functions-for-the-rectangular-assignment-problem
// 
// Both this code and the original code are published under the BSD license.
// by Cong Ma, 2016
// 

#include <iostream>
#include <vector>
#include <cstdlib>
#include <cfloat>
#include <cmath>

using namespace std;


class HungarianAlgorithm
{
public:
	HungarianAlgorithm();
	~HungarianAlgorithm();
	double Solve(vector<vector<double> >& DistMatrix, vector<int>& Assignment);

private:
	void assignmentoptimal(int *assignment, double *cost, double *distMatrix, int nOfRows, int nOfColumns);
	void buildassignmentvector(int *assignment, bool *starMatrix, int nOfRows, int nOfColumns);
	void computeassignmentcost(int *assignment, double *cost, double *distMatrix, int nOfRows);
	void step2a(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);
	void step2b(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);
	void step3(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);
	void step4(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim, int row, int col);
	void step5(int *assignment, double *distMatrix, bool *starMatrix, bool *newStarMatrix, bool *primeMatrix, bool *coveredColumns, bool *coveredRows, int nOfRows, int nOfColumns, int minDim);
};


================================================
FILE: server/src/KalmanTracker.cpp
================================================
///////////////////////////////////////////////////////////////////////////////
// KalmanTracker.cpp: KalmanTracker Class Implementation Declaration

#include "KalmanTracker.h"


int KalmanTracker::kf_count = 0;


// initialize Kalman filter
void KalmanTracker::init_kf(StateType stateMat)
{
	int stateNum = 7;
	int measureNum = 4;
	kf = KalmanFilter(stateNum, measureNum, 0);

	measurement = Mat::zeros(measureNum, 1, CV_32F);

	kf.transitionMatrix = (Mat_<float>(stateNum, stateNum) <<
		1, 0, 0, 0, 1, 0, 0,
		0, 1, 0, 0, 0, 1, 0,
		0, 0, 1, 0, 0, 0, 1,
		0, 0, 0, 1, 0, 0, 0,
		0, 0, 0, 0, 1, 0, 0,
		0, 0, 0, 0, 0, 1, 0,
		0, 0, 0, 0, 0, 0, 1);

	setIdentity(kf.measurementMatrix);
	setIdentity(kf.processNoiseCov, Scalar::all(1e-2));
	setIdentity(kf.measurementNoiseCov, Scalar::all(1e-1));
	setIdentity(kf.errorCovPost, Scalar::all(1));
	
	// initialize state vector with bounding box in [cx,cy,s,r] style
	kf.statePost.at<float>(0, 0) = stateMat.x + stateMat.width / 2;
	kf.statePost.at<float>(1, 0) = stateMat.y + stateMat.height / 2;
	kf.statePost.at<float>(2, 0) = stateMat.area();
	kf.statePost.at<float>(3, 0) = stateMat.width / stateMat.height;
}


// Predict the estimated bounding box.
StateType KalmanTracker::predict()
{
	// predict
	Mat p = kf.predict();
	m_age += 1;

	if (m_time_since_update > 0)
		m_hit_streak = 0;
	m_time_since_update += 1;

	StateType predictBox = get_rect_xysr(p.at<float>(0, 0), p.at<float>(1, 0), p.at<float>(2, 0), p.at<float>(3, 0));

	m_history.push_back(predictBox);
	return m_history.back();
}


// Update the state vector with observed bounding box.
void KalmanTracker::update(StateType stateMat)
{
	m_time_since_update = 0;
	m_history.clear();
	m_hits += 1;
	m_hit_streak += 1;

	// measurement
	measurement.at<float>(0, 0) = stateMat.x + stateMat.width / 2;
	measurement.at<float>(1, 0) = stateMat.y + stateMat.height / 2;
	measurement.at<float>(2, 0) = stateMat.area();
	measurement.at<float>(3, 0) = stateMat.width / stateMat.height;

	// update
	kf.correct(measurement);
}


// Return the current state vector
StateType KalmanTracker::get_state()
{
	Mat s = kf.statePost;
	return get_rect_xysr(s.at<float>(0, 0), s.at<float>(1, 0), s.at<float>(2, 0), s.at<float>(3, 0));
}


// Convert bounding box from [cx,cy,s,r] to [x,y,w,h] style.
StateType KalmanTracker::get_rect_xysr(float cx, float cy, float s, float r)
{
	float w = sqrt(s * r);
	float h = s / w;
	float x = (cx - w / 2);
	float y = (cy - h / 2);

	if (x < 0 && cx > 0)
		x = 0;
	if (y < 0 && cy > 0)
		y = 0;

	return StateType(x, y, w, h);
}



/*
// --------------------------------------------------------------------
// Kalman Filter Demonstrating, a 2-d ball demo
// --------------------------------------------------------------------

const int winHeight = 600;
const int winWidth = 800;

Point mousePosition = Point(winWidth >> 1, winHeight >> 1);

// mouse event callback
void mouseEvent(int event, int x, int y, int flags, void *param)
{
	if (event == CV_EVENT_MOUSEMOVE) {
		mousePosition = Point(x, y);
	}
}

void TestKF();

void main()
{
	TestKF();
}


void TestKF()
{
	int stateNum = 4;
	int measureNum = 2;
	KalmanFilter kf = KalmanFilter(stateNum, measureNum, 0);

	// initialization
	Mat processNoise(stateNum, 1, CV_32F);
	Mat measurement = Mat::zeros(measureNum, 1, CV_32F);

	kf.transitionMatrix = *(Mat_<float>(stateNum, stateNum) <<
		1, 0, 1, 0,
		0, 1, 0, 1,
		0, 0, 1, 0,
		0, 0, 0, 1);

	setIdentity(kf.measurementMatrix);
	setIdentity(kf.processNoiseCov, Scalar::all(1e-2));
	setIdentity(kf.measurementNoiseCov, Scalar::all(1e-1));
	setIdentity(kf.errorCovPost, Scalar::all(1));

	randn(kf.statePost, Scalar::all(0), Scalar::all(winHeight));

	namedWindow("Kalman");
	setMouseCallback("Kalman", mouseEvent);
	Mat img(winHeight, winWidth, CV_8UC3);

	while (1)
	{
		// predict
		Mat prediction = kf.predict();
		Point predictPt = Point(prediction.at<float>(0, 0), prediction.at<float>(1, 0));

		// generate measurement
		Point statePt = mousePosition;
		measurement.at<float>(0, 0) = statePt.x;
		measurement.at<float>(1, 0) = statePt.y;

		// update
		kf.correct(measurement);

		// visualization
		img.setTo(Scalar(255, 255, 255));
		circle(img, predictPt, 8, CV_RGB(0, 255, 0), -1); // predicted point as green
		circle(img, statePt, 8, CV_RGB(255, 0, 0), -1); // current position as red

		imshow("Kalman", img);
		char code = (char)waitKey(100);
		if (code == 27 || code == 'q' || code == 'Q')
			break;
	}
	destroyWindow("Kalman");
}
*/
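The tracker's Kalman state stores each box as [cx, cy, s, r] (centre, area, aspect ratio), converting from [x, y, w, h] in `init_kf()`/`update()` and back in `get_rect_xysr()`. A Python sketch of both directions (omitting the negative-coordinate clamping the C++ version applies):

```python
import math

def to_xysr(x, y, w, h):
    """[x, y, w, h] -> [cx, cy, s, r], as in init_kf() and update()."""
    return (x + w / 2, y + h / 2, w * h, w / h)

def to_xywh(cx, cy, s, r):
    """Inverse conversion mirroring KalmanTracker::get_rect_xysr.

    Since s = w*h and r = w/h, we have s*r = w^2, hence w = sqrt(s*r).
    """
    w = math.sqrt(s * r)
    h = s / w
    return (cx - w / 2, cy - h / 2, w, h)
```

The two functions are exact inverses for positive widths and heights, which is why the filter can track in [cx, cy, s, r] space yet report ordinary rectangles.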


================================================
FILE: server/src/KalmanTracker.h
================================================
///////////////////////////////////////////////////////////////////////////////
// KalmanTracker.h: KalmanTracker Class Declaration

#ifndef KALMAN_H
#define KALMAN_H 2

#include "opencv2/video/tracking.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "people.hpp"

using namespace std;
using namespace cv;

#define StateType Rect_<float>


// This class represents the internal state of an individual tracked object, observed as a bounding box.
class KalmanTracker
{
public:
	KalmanTracker()
	{
		init_kf(StateType());
		m_time_since_update = 0;
		m_hits = 0;
		m_hit_streak = 0;
		m_age = 0;
		m_id = kf_count;
		//kf_count++;
	}
	KalmanTracker(StateType initRect, Person *p = NULL)
	{
		init_kf(initRect);
		m_time_since_update = 0;
		m_hits = 0;
		m_hit_streak = 0;
		m_age = 0;
		m_id = kf_count;
    m_p = p;
		kf_count++;
	}

	~KalmanTracker()
	{
		m_history.clear();
	}

	StateType predict();
	void update(StateType stateMat);
	
	StateType get_state();
	StateType get_rect_xysr(float cx, float cy, float s, float r);

	static int kf_count;

	int m_time_since_update;
	int m_hits;
	int m_hit_streak;
	int m_age;
	int m_id;
  Person *m_p;

private:
	void init_kf(StateType stateMat);

	cv::KalmanFilter kf;
	cv::Mat measurement;

	std::vector<StateType> m_history;
};




#endif


================================================
FILE: server/src/Tracker.cpp
================================================
#include <vector>
#include <set>
#include <iterator>
#include <iostream>
#include "Hungarian.h"
#include "Tracker.hpp"

using namespace std;


Tracker::Tracker() {
  frame_count = 0;
  max_age = 1;
  min_hits = 3;
  iouThreshold = 0.3;
  trkNum = detNum = 0;
  KalmanTracker::kf_count = 0; // tracking id relies on this, so we have to reset it in each seq.
}

// Computes IOU between two bounding boxes
double Tracker::GetIOU(Rect_<float> bb_test, Rect_<float> bb_gt)
{
  float in = (bb_test & bb_gt).area();
  float un = bb_test.area() + bb_gt.area() - in;

  if (un < DBL_EPSILON)
    return 0;

  return (double)(in / un);
}

vector<TrackingBox> Tracker::init(vector<TrackingBox> &detections) {
  frame_count = 1;
  // initialize kalman trackers using first detections.
  for (unsigned int i = 0; i < detections.size(); i++)
  {
    KalmanTracker trk = KalmanTracker(detections[i].box, detections[i].p);
    trackers.push_back(trk);
  }

  // output the first frame detections
  for (unsigned int id = 0; id < detections.size(); id++)
  {
    TrackingBox tb = detections[id];
    tb.id = id + 1;
    tb.p->set_id(tb.id);
    frameTrackingResult.push_back(tb);
    // 
  }
  return frameTrackingResult;
}

vector<TrackingBox> Tracker::update(vector<TrackingBox> &detections) {
  frame_count++;
  // 3.1. get predicted locations from existing trackers.
  predictedBoxes.clear();

  for (auto it = trackers.begin(); it != trackers.end();)
  {
    Rect_<float> pBox = (*it).predict();
    if (pBox.x >= 0 && pBox.y >= 0)
    {
      predictedBoxes.push_back(pBox);
      it++;
    }
    else
    {
      it = trackers.erase(it);
      //cerr << "Box invalid at frame: " << frame_count << endl;
    }
  }

  ///////////////////////////////////////
  // 3.2. associate detections to tracked object (both represented as bounding boxes)
  // dets : detFrameData[fi]
  trkNum = predictedBoxes.size();
  detNum = detections.size();

  iouMatrix.clear();
  iouMatrix.resize(trkNum, vector<double>(detNum, 0));

  for (unsigned int i = 0; i < trkNum; i++) // compute iou matrix as a distance matrix
  {
    for (unsigned int j = 0; j < detNum; j++)
    {
      // use 1-iou because the hungarian algorithm computes a minimum-cost assignment.
      iouMatrix[i][j] = 1 - GetIOU(predictedBoxes[i], detections[j].box);
    }
  }

  // solve the assignment problem using hungarian algorithm.
  // the resulting assignment is [track(prediction) : detection], with len=preNum
  HungarianAlgorithm HungAlgo;
  assignment.clear();
  HungAlgo.Solve(iouMatrix, assignment);

  // find matches, unmatched_detections and unmatched_predictions
  unmatchedTrajectories.clear();
  unmatchedDetections.clear();
  allItems.clear();
  matchedItems.clear();

  if (detNum > trkNum) //	there are unmatched detections
  {
    for (unsigned int n = 0; n < detNum; n++)
      allItems.insert(n);

    for (unsigned int i = 0; i < trkNum; ++i)
      matchedItems.insert(assignment[i]);

    set_difference(allItems.begin(), allItems.end(),
      matchedItems.begin(), matchedItems.end(),
      insert_iterator<set<int>>(unmatchedDetections, unmatchedDetections.begin()));
  }
  else if (detNum < trkNum) // there are unmatched trajectory/predictions
  {
    for (unsigned int i = 0; i < trkNum; ++i)
      if (assignment[i] == -1) // unassigned label will be set as -1 in the assignment algorithm
        unmatchedTrajectories.insert(i);
  }

  // filter out matched with low IOU
  matchedPairs.clear();
  for (unsigned int i = 0; i < trkNum; ++i)
  {
    if (assignment[i] == -1) // pass over invalid values
      continue;
    if (1 - iouMatrix[i][assignment[i]] < iouThreshold)
    {
      unmatchedTrajectories.insert(i);
      unmatchedDetections.insert(assignment[i]);
    }
    else
      matchedPairs.push_back(cv::Point(i, assignment[i]));
  }

  ///////////////////////////////////////
  // 3.3. updating trackers

  // update matched trackers with assigned detections.
  // each prediction is corresponding to a tracker
  int detIdx, trkIdx;
  for (unsigned int i = 0; i < matchedPairs.size(); i++)
  {
    trkIdx = matchedPairs[i].x;
    detIdx = matchedPairs[i].y;
    trackers[trkIdx].update(detections[detIdx].box);
    trackers[trkIdx].m_p->update(detections[detIdx].p);
  }

  // create and initialise new trackers for unmatched detections
  for (auto umd : unmatchedDetections)
  {
    KalmanTracker tracker = KalmanTracker(detections[umd].box, detections[umd].p);
    trackers.push_back(tracker);
  }

  // get trackers' output
  frameTrackingResult.clear();
  for (auto it = trackers.begin(); it != trackers.end();)
  {
    // remove dead tracklets first, so the check also covers the first tracker
    if ((*it).m_time_since_update > max_age) {
      // delete person object
      delete (*it).m_p;
      it = trackers.erase(it);
      continue;
    }

    if (((*it).m_time_since_update < 1) &&
      ((*it).m_hit_streak >= min_hits || frame_count <= min_hits))
    {
      TrackingBox res;
      res.box = (*it).get_state();
      res.id = (*it).m_id + 1;
      res.frame = frame_count;

      // person info update
      res.p = (*it).m_p;
      res.p->set_id(res.id);
      res.p->set_rect(res.box);

      frameTrackingResult.push_back(res);
    }
    ++it;
  }

  return frameTrackingResult;
}	// update method end

Tracker::~Tracker() {

}


================================================
FILE: server/src/Tracker.hpp
================================================
#pragma once
#include "KalmanTracker.h"
#include <set>
#include "people.hpp"

typedef struct TrackingBox
{
  int frame;
  int id;
  Rect_<float> box;
  Person* p;
}TrackingBox;

class Tracker
{

private:
  int frame_count;
  int max_age;
  int min_hits;
  double iouThreshold;

  unsigned int trkNum;
  unsigned int detNum;

  vector<KalmanTracker> trackers;

  // variables used in the for-loop
  vector<Rect_<float> > predictedBoxes;
  vector<vector<double> > iouMatrix;
  vector<int> assignment;
  set<int> unmatchedDetections;
  set<int> unmatchedTrajectories;
  set<int> allItems;
  set<int> matchedItems;
  vector<cv::Point> matchedPairs;
  vector<TrackingBox> frameTrackingResult;

public:
  Tracker();
  // Computes IOU between two bounding boxes
  double GetIOU(Rect_<float> bb_test, Rect_<float> bb_gt);
  vector<TrackingBox> init(vector<TrackingBox> &detections);
  vector<TrackingBox> update(vector<TrackingBox> &detections);
  ~Tracker();
};


================================================
FILE: server/src/args.cpp
================================================
// https://github.com/pjreddie/template

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "args.hpp"

void del_arg(int argc, char **argv, int index)
{
  int i;
  for (i = index; i < argc - 1; ++i) argv[i] = argv[i + 1];
  argv[i] = 0;
}

int find_arg(int argc, char* argv[], const char *arg)
{
  int i;
  for (i = 0; i < argc; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      del_arg(argc, argv, i);
      return 1;
    }
  }
  return 0;
}

int find_int_arg(int argc, char **argv, const char *arg, int def)
{
  int i;
  for (i = 0; i < argc - 1; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      def = atoi(argv[i + 1]);
      del_arg(argc, argv, i);
      del_arg(argc, argv, i);
      break;
    }
  }
  return def;
}

float find_float_arg(int argc, char **argv, const char *arg, float def)
{
  int i;
  for (i = 0; i < argc - 1; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      def = atof(argv[i + 1]);
      del_arg(argc, argv, i);
      del_arg(argc, argv, i);
      break;
    }
  }
  return def;
}

const char *find_char_arg(int argc, char **argv, const char *arg, const char *def)
{
  int i;
  for (i = 0; i < argc - 1; ++i) {
    if (!argv[i]) continue;
    if (0 == strcmp(argv[i], arg)) {
      def = argv[i + 1];
      del_arg(argc, argv, i);
      del_arg(argc, argv, i);
      break;
    }
  }
  return def;
}


================================================
FILE: server/src/args.hpp
================================================
// https://github.com/pjreddie/template

#ifndef ARGS_H
#define ARGS_H

int find_arg(int argc, char* argv[], const char *arg);
int find_int_arg(int argc, char **argv, const char *arg, int def);
float find_float_arg(int argc, char **argv, const char *arg, float def);
const char *find_char_arg(int argc, char **argv, const char *arg, const char *def);

#endif

================================================
FILE: server/src/base64.cpp
================================================
/* 
   base64.cpp and base64.h

   base64 encoding and decoding with C++.

   Version: 1.01.00

   Copyright (C) 2004-2017 René Nyffenegger

   This source code is provided 'as-is', without any express or implied
   warranty. In no event will the author be held liable for any damages
   arising from the use of this software.

   Permission is granted to anyone to use this software for any purpose,
   including commercial applications, and to alter it and redistribute it
   freely, subject to the following restrictions:

   1. The origin of this source code must not be misrepresented; you must not
      claim that you wrote the original source code. If you use this source code
      in a product, an acknowledgment in the product documentation would be
      appreciated but is not required.

   2. Altered source versions must be plainly marked as such, and must not be
      misrepresented as being the original source code.

   3. This notice may not be removed or altered from any source distribution.

   René Nyffenegger rene.nyffenegger@adp-gmbh.ch

*/

#include "base64.h"
#include <iostream>

static const std::string base64_chars = 
             "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
             "abcdefghijklmnopqrstuvwxyz"
             "0123456789+/";


static inline bool is_base64(unsigned char c) {
  return (isalnum(c) || (c == '+') || (c == '/'));
}

std::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len) {
  std::string ret;
  int i = 0;
  int j = 0;
  unsigned char char_array_3[3];
  unsigned char char_array_4[4];

  while (in_len--) {
    char_array_3[i++] = *(bytes_to_encode++);
    if (i == 3) {
      char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
      char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
      char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
      char_array_4[3] = char_array_3[2] & 0x3f;

      for(i = 0; (i <4) ; i++)
        ret += base64_chars[char_array_4[i]];
      i = 0;
    }
  }

  if (i)
  {
    for(j = i; j < 3; j++)
      char_array_3[j] = '\0';

    char_array_4[0] = ( char_array_3[0] & 0xfc) >> 2;
    char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
    char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);

    for (j = 0; (j < i + 1); j++)
      ret += base64_chars[char_array_4[j]];

    while((i++ < 3))
      ret += '=';

  }

  return ret;

}

std::string base64_decode(std::string const& encoded_string) {
  size_t in_len = encoded_string.size();
  int i = 0;
  int j = 0;
  int in_ = 0;
  unsigned char char_array_4[4], char_array_3[3];
  std::string ret;

  while (in_len-- && ( encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {
    char_array_4[i++] = encoded_string[in_]; in_++;
    if (i ==4) {
      for (i = 0; i <4; i++)
        char_array_4[i] = base64_chars.find(char_array_4[i]) & 0xff;

      char_array_3[0] = ( char_array_4[0] << 2       ) + ((char_array_4[1] & 0x30) >> 4);
      char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
      char_array_3[2] = ((char_array_4[2] & 0x3) << 6) +   char_array_4[3];

      for (i = 0; (i < 3); i++)
        ret += char_array_3[i];
      i = 0;
    }
  }

  if (i) {
    for (j = 0; j < i; j++)
      char_array_4[j] = base64_chars.find(char_array_4[j]) & 0xff;

    char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
    char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);

    for (j = 0; (j < i - 1); j++) ret += char_array_3[j];
  }

  return ret;
}


================================================
FILE: server/src/base64.h
================================================
//
//  base64 encoding and decoding with C++.
//  Version: 1.01.00
//

#ifndef BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A
#define BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A

#include <string>

std::string base64_encode(unsigned char const* , unsigned int len);
std::string base64_decode(std::string const& s);

#endif /* BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E704A */


================================================
FILE: server/src/frame.cpp
================================================
#include <string>
#include <sstream>
#include <cstring>
#include "frame.hpp"
#include "json.h"
#include "base64.h"

using namespace std;

Frame_pool::Frame_pool()
{
  mem_pool_msg = new CMemPool(MEM_POOL_UNIT_NUM, MSG_BUF_LEN);
  mem_pool_seq = new CMemPool(MEM_POOL_UNIT_NUM, SEQ_BUF_LEN);
  mem_pool_det = new CMemPool(MEM_POOL_UNIT_NUM, DET_BUF_LEN);
};

Frame_pool::Frame_pool(int unit_num)
{
  mem_pool_msg = new CMemPool(unit_num, MSG_BUF_LEN);
  mem_pool_seq = new CMemPool(unit_num, SEQ_BUF_LEN);
  mem_pool_det = new CMemPool(unit_num, DET_BUF_LEN);
};

Frame_pool::~Frame_pool()
{
  delete mem_pool_msg;
  delete mem_pool_seq;
  delete mem_pool_det;
};

Frame Frame_pool::alloc_frame(void) {
  Frame frame;
  frame_init(frame);
  return frame;
};

void Frame_pool::free_frame(Frame& frame) {
  mem_pool_seq->Free((void *)frame.seq_buf);
  mem_pool_msg->Free((void *)frame.msg_buf);
  mem_pool_det->Free((void *)frame.det_buf);
}

void Frame_pool::frame_init(Frame& frame) {
  frame.seq_len = frame.msg_len = frame.det_len = 0;
  frame.seq_buf = (unsigned char *)(mem_pool_seq->Alloc(SEQ_BUF_LEN, true));
  frame.msg_buf = (unsigned char *)(mem_pool_msg->Alloc(MSG_BUF_LEN, true));
  frame.det_buf = (unsigned char *)(mem_pool_det->Alloc(DET_BUF_LEN, true));
};



int frame_to_json(void* buf, const Frame& frame) {
	stringstream ss;
	ss << "{\n\"seq\":\"" << base64_encode((unsigned char *)frame.seq_buf, frame.seq_len) << "\",\n"
		<< "\"msg\": \"" << base64_encode((unsigned char*)(frame.msg_buf), frame.msg_len) << "\",\n" 
		<< "\"det\": \"" << base64_encode((unsigned char*)(frame.det_buf), frame.det_len) 
    << "\"\n}";
    

	memcpy(buf, ss.str().c_str(), ss.str().size());
	((unsigned char*)buf)[ss.str().size()] = '\0';
	return ss.str().size();
};

void json_to_frame(void* buf, Frame& frame) {
	json_object *raw_obj;
	raw_obj = json_tokener_parse((const char*)buf);

	json_object *seq_obj = json_object_object_get(raw_obj, "seq");
	json_object *msg_obj = json_object_object_get(raw_obj, "msg");
	json_object *det_obj = json_object_object_get(raw_obj, "det");

	string seq(base64_decode(json_object_get_string(seq_obj)));
	string msg(base64_decode(json_object_get_string(msg_obj)));
	string det(base64_decode(json_object_get_string(det_obj)));

	frame.seq_len = seq.size();
	frame.msg_len = msg.size();
	frame.det_len = det.size();

	memcpy(frame.seq_buf, seq.c_str(), frame.seq_len);
	((unsigned char*)frame.seq_buf)[frame.seq_len] = '\0';

	memcpy(frame.msg_buf, msg.c_str(), frame.msg_len);
	((unsigned char*)frame.msg_buf)[frame.msg_len] = '\0';

  if (frame.det_len > 0) {
    memcpy(frame.det_buf, det.c_str(), frame.det_len);
    ((unsigned char*)frame.det_buf)[frame.det_len] = '\0';
  }

  // free: children returned by json_object_object_get are borrowed references,
  // so releasing the root object releases the whole tree
  json_object_put(raw_obj);
};


================================================
FILE: server/src/frame.hpp
================================================
#ifndef __FRAME_H
#define __FRAME_H

#include "mem_pool.hpp"

struct Frame {
  int seq_len;
  int msg_len;
  int det_len;
  unsigned char *seq_buf;
  unsigned char *msg_buf;
  unsigned char *det_buf;
};

const int SEQ_BUF_LEN = 100;
const int MSG_BUF_LEN = 76800;
const int DET_BUF_LEN = 25600;
const int JSON_BUF_LEN = MSG_BUF_LEN * 2;
class Frame_pool
{
private:
  CMemPool *mem_pool_msg;
  CMemPool *mem_pool_seq;
  CMemPool *mem_pool_det;
  const int MEM_POOL_UNIT_NUM = 5000;

public:
  Frame_pool();
  Frame_pool(int unit_num);
  Frame alloc_frame(void);
  void free_frame(Frame& frame);
  void frame_init(Frame& frame);
  ~Frame_pool();
};

int frame_to_json(void* buf, const Frame& frame);
void json_to_frame(void* buf, Frame& frame);

#endif


================================================
FILE: server/src/mem_pool.cpp
================================================
#include <stdio.h>
#include <stdlib.h>
#include "mem_pool.hpp"

/*==========================================================
CMemPool:
    Constructor of this class. It allocates one memory block from the system
    and builds a static doubly linked list to manage all memory units.

Parameters:
    [in]ulUnitNum
    The number of units that make up the memory block.

    [in]ulUnitSize
    The size of each unit.
//=========================================================
*/
CMemPool::CMemPool(unsigned long ulUnitNum,unsigned long ulUnitSize) :
    m_pMemBlock(NULL), m_pAllocatedMemBlock(NULL), m_pFreeMemBlock(NULL), 
    m_ulBlockSize(ulUnitNum * (ulUnitSize+sizeof(struct _Unit))), 
    m_ulUnitSize(ulUnitSize)
{    
    m_pMemBlock = malloc(m_ulBlockSize);     //Allocate a memory block.
    
    if(NULL != m_pMemBlock)
    {
        for(unsigned long i=0; i<ulUnitNum; i++)  //Link all mem unit . Create linked list.
        {
            struct _Unit *pCurUnit = (struct _Unit *)( (char *)m_pMemBlock + i*(ulUnitSize+sizeof(struct _Unit)) );
            
            pCurUnit->pPrev = NULL;
            pCurUnit->pNext = m_pFreeMemBlock;    //Insert the new unit at head.
            
            if(NULL != m_pFreeMemBlock)
            {
                m_pFreeMemBlock->pPrev = pCurUnit;
            }
            m_pFreeMemBlock = pCurUnit;
        }
    }    
} 

/*===============================================================
~CMemPool():
    Destructor of this class. It frees the memory block.
//===============================================================
*/
CMemPool::~CMemPool()
{
    free(m_pMemBlock);
}

/*================================================================
Alloc:
    Allocates a memory unit. If the memory pool cannot provide a suitable
    unit (the request is too large or the pool is exhausted), it falls back
    to the system allocator.

Parameters:
    [in]ulSize
    Requested size in bytes.

    [in]bUseMemPool
    Whether to use the memory pool.

Return Values:
    A pointer to a memory unit.
//=================================================================
*/
void* CMemPool::Alloc(unsigned long ulSize, bool bUseMemPool)
{
    if(    ulSize > m_ulUnitSize || false == bUseMemPool || 
        NULL == m_pMemBlock   || NULL == m_pFreeMemBlock)
    {
        return malloc(ulSize);
    }

    //Now the free list isn't empty.
    struct _Unit *pCurUnit = m_pFreeMemBlock;
    m_pFreeMemBlock = pCurUnit->pNext;            //Take a unit from the free list.
    if(NULL != m_pFreeMemBlock)
    {
        m_pFreeMemBlock->pPrev = NULL;
    }

    pCurUnit->pPrev = NULL;                       //The unit becomes the new head of the allocated list.
    pCurUnit->pNext = m_pAllocatedMemBlock;

    if(NULL != m_pAllocatedMemBlock)
    {
        m_pAllocatedMemBlock->pPrev = pCurUnit;
    }
    m_pAllocatedMemBlock = pCurUnit;

    return (void *)((char *)pCurUnit + sizeof(struct _Unit) );
}

/*================================================================
Free:
    Frees a memory unit. If the pointer refers to a unit inside the pool,
    the unit is unlinked from the allocated list and inserted into the free
    list. Otherwise, the system function "free" is called.

Parameters:
    [in]p
    Pointer to the memory unit to free.

Return Values:
    none
//================================================================
*/
void CMemPool::Free( void* p )
{
    if(m_pMemBlock<p && p<(void *)((char *)m_pMemBlock + m_ulBlockSize) )
    {
        struct _Unit *pCurUnit = (struct _Unit *)((char *)p - sizeof(struct _Unit) );

        //Unlink the unit from the allocated list. It may sit anywhere in the
        //list, not only at the head.
        if(NULL != pCurUnit->pPrev)
        {
            pCurUnit->pPrev->pNext = pCurUnit->pNext;
        }
        else
        {
            m_pAllocatedMemBlock = pCurUnit->pNext;
        }
        if(NULL != pCurUnit->pNext)
        {
            pCurUnit->pNext->pPrev = pCurUnit->pPrev;
        }

        //Insert the unit at the head of the free list.
        pCurUnit->pPrev = NULL;
        pCurUnit->pNext = m_pFreeMemBlock;
        if(NULL != m_pFreeMemBlock)
        {
             m_pFreeMemBlock->pPrev = pCurUnit;
        }

        m_pFreeMemBlock = pCurUnit;
    }
    else
    {
        free(p);
    }
}


================================================
FILE: server/src/mem_pool.hpp
================================================
#ifndef __MEMPOOL_H__
#define __MEMPOOL_H__
// https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how-to-implement-it
class CMemPool
{
private:
    //This structure definition lets us manipulate the linked lists conveniently.
    struct _Unit                     //The node type of the linked lists.
    {
        struct _Unit *pPrev, *pNext;
    };

    void* m_pMemBlock;                //The address of the memory pool.

    //All units are managed with two linked lists.
    struct _Unit*    m_pAllocatedMemBlock; //Head pointer of the allocated list.
    struct _Unit*    m_pFreeMemBlock;      //Head pointer of the free list.

    unsigned long    m_ulUnitSize; //Size of one memory unit; the pool holds many units.
    unsigned long    m_ulBlockSize;//Size of the whole pool, which is made up of memory units.

public:
    CMemPool(unsigned long lUnitNum = 50, unsigned long lUnitSize = 1024);
    ~CMemPool();
    
    void* Alloc(unsigned long ulSize, bool bUseMemPool = true); //Allocate memory unit
    void Free( void* p );                                   //Free memory unit
};

#endif //__MEMPOOL_H__


================================================
FILE: server/src/people.cpp
================================================
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/vector.hpp>
#include <opencv2/opencv.hpp>
#include <cfloat>
#include <iostream>
#include <cstring>
#include <cmath>
#include <string>
#include <sstream>
#include <deque>
#include <queue>
#include "people.hpp"

void Person::set_part(int part, float x, float y) {
  history.front().x[part] = x;
  history.front().y[part] = y;

  // bounding box update: a point can extend both bounds (e.g. the first one),
  // so these checks must not be chained with else-if
  if (x < min_x)
    min_x = x;
  if (x > max_x)
    max_x = x;

  if (y < min_y)
    min_y = y;
  if (y > max_y)
    max_y = y;
}

void Person::set_action(int type) {
  if (type < 0)
    type = ACTION_TYPE_NUM;
  action = type;

  if (actions.size() == ACTION_HIS_NUM)
    actions.pop_front();
  actions.push_back(type);
}
  
float Person::get_dist(float x1, float y1, float x2, float y2) {
  if (x1 == 0 || x2 == 0)
    return 0.0;
  else {
    return sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
  }
}

float Person::get_deg(float x1, float y1, float x2, float y2) {
  if (x1 == 0 || x2 == 0)
    return 0.0;
  double dx = x2 - x1;
  double dy = y2 - y1;
  double rad = atan2(dy, dx);
  double degree = (rad * 180) / M_PI;
  if (degree < 0)
    degree += 360;
  return degree;
}

bool Person::has_output(void) {
  if (overlap_count <= 0 && history.size() == HIS_NUM) {
    overlap_count = OVERLAP_NUM;
    return true;
  }
  else 
    return false;
}

void Person::update(Person* n_p) 
{
  static const int change_part[] = {
    RELBOW, RWRIST, LELBOW, LWRIST, RKNEE, RANKLE, LKNEE, LANKLE
  };
  static const int change_pair[] = {
    RSHOULDER, RELBOW, 
    RELBOW, RWRIST, 
    LSHOULDER, LELBOW, 
    LELBOW, LWRIST, 
    RHIP, RKNEE, 
    RKNEE, RANKLE, 
    LHIP, LKNEE, 
    LKNEE, LANKLE
  };

  Change c;
  double deg, n_deg;
  int part, pair_1, pair_2;
  const Joint& j = history.back();
  const Joint& n_j = n_p->history.front();

  // change calc
  for (int i = 0; i < CHANGE_NUM; i++) {
    part = change_part[i];
    c.dist[i] = get_dist(j.x[part], j.y[part], n_j.x[part], n_j.y[part]); // (x1, y1, x2, y2) order; result is already non-negative

    pair_1 = change_pair[i * 2];
    pair_2 = change_pair[i * 2 + 1];

    deg = get_deg(j.x[pair_1], j.y[pair_1], j.x[pair_2], j.y[pair_2]);
    n_deg = get_deg(n_j.x[pair_1], n_j.y[pair_1], n_j.x[pair_2], n_j.y[pair_2]);
    c.cur_deg[i] = n_deg;
    if (deg == 0 || n_deg == 0)
      c.deg[i] = 0.0;
    else 
      c.deg[i] = fabs(deg - n_deg);
  }

  if (history.size() == HIS_NUM) {
    history.pop_front();
    change_history.pop_front();
  }
  history.push_back(n_p->history.front());
  change_history.push_back(c);
  overlap_count--;

  // delete
  delete n_p;
}

/*
std::string get_history(void) const 
{
  std::stringstream ss;
  for (int i = 0; i < history.size(); i++) {
    if (i != 0)
      ss << '\n';
    for (int j = 0; j < JOINT_NUM; j++) {
      if (j != 0)
        ss << ',';
      ss << history[i].x[j] << ',' << history[i].y[j];
    }
  }
  for (int i = history.size(); i < HIS_NUM; i++) {
    if (i != 0)
      ss << '\n';
    for (int j = 0; j < JOINT_NUM; j++) {
      if (j != 0)
        ss << ',';
      ss << 0.0 << ',' << 0.0;
    }
  }
  return ss.str();
}
*/

bool Person::check_crash(const Person& other) const
{
  const static int punch_check_joint[] = {RELBOW, RWRIST, LELBOW, LWRIST};
  const static int kick_check_joint[] = {RKNEE, RANKLE, LKNEE, LANKLE};
  
  const int* check_joint = nullptr;
  int my_action = this->get_action();

  if (my_action == PUNCH)
    check_joint = punch_check_joint; 
  else if (my_action == KICK)
    check_joint = kick_check_joint;
  else
    return false;

  cv::Rect_<float> other_rect = other.get_rect();

  const Joint& j = history.back();
  float x, y;
  for (int i = 0; i < 4; i++) {
    x = j.x[check_joint[i]];
    y = j.y[check_joint[i]];

    if (other_rect.x <= x && x <= other_rect.x + other_rect.width &&
        other_rect.y <= y && y <= other_rect.y + other_rect.height)
      return true;
  }
  return false;
}

cv::Rect_<float> Person::get_crash_rect(const Person& p) const
{
  cv::Rect_<float> a_rect = this->get_rect();
  cv::Rect_<float> b_rect = p.get_rect();

  float min_x, min_y, max_x, max_y;

  min_x = a_rect.x < b_rect.x ? a_rect.x : b_rect.x;
  min_y = a_rect.y < b_rect.y ? a_rect.y : b_rect.y;
  max_x = (a_rect.x + a_rect.width) > (b_rect.x + b_rect.width) ? (a_rect.x + a_rect.width) : (b_rect.x +
      b_rect.width);
  max_y = (a_rect.y + a_rect.height) > (b_rect.y + b_rect.height) ? (a_rect.y + a_rect.height) : (b_rect.y +
      b_rect.height);

  return cv::Rect_<float>(cv::Point_<float>(min_x, min_y),
      cv::Point_<float>(max_x, max_y));
}

std::string Person::get_history(void) const 
{
  std::stringstream ss;
  for (size_t i = 0; i < change_history.size(); i++) {
    if (i != 0)
      ss << '\n';
    for (int j = 0; j < CHANGE_NUM; j++) {
      if (j != 0)
        ss << ',';
      ss << change_history[i].dist[j] << ',' << change_history[i].deg[j] << ',' << change_history[i].cur_deg[j];
    }
  }
  for (int i = change_history.size(); i < HIS_NUM - 1; i++) {
    if (i != 0)
      ss << '\n';
    for (int j = 0; j < CHANGE_NUM; j++) {
      if (j != 0)
        ss << ',';
      ss << 0.0 << ',' << 0.0 << ',' << 0.0;
    }
  }
  return ss.str();
}

cv::Rect_<float> Person::get_rect(void) const
{
  return cv::Rect_<float>(cv::Point_<float>(min_x, min_y),
      cv::Point_<float>(max_x, max_y));
}

void Person::set_rect(cv::Rect_<float>& rect) 
{
  min_x = rect.x;
  min_y = rect.y;
  max_x = rect.x + rect.width;
  max_y = rect.y + rect.height;
}

Person& Person::operator=(const Person& p)
{
  track_id = p.track_id;
  max_x = p.max_x;
  max_y = p.max_y;
  min_x = p.min_x;
  min_y = p.min_y;
  history = p.history;
  change_history = p.change_history;
  overlap_count = p.overlap_count;
  return *this;
}

std::ostream& operator<<(std::ostream &out, const Person &p)
{
  for (size_t i = 0; i < p.change_history.size(); i++) {
    for (int j = 0; j < p.CHANGE_NUM; j++) {
      if (j != 0)
        out << ',';
      out << p.change_history[i].dist[j] << ',' <<  p.change_history[i].deg[j] << ',' << p.change_history[i].cur_deg[j];
    }
    out << '\n';
  }
  return out;
}


std::vector<Person*> People::to_person(void) {
  std::vector<Person*> persons;
  int person_num = keyshape[0];
  int part_num = keyshape[1];
  for (int person = 0; person < person_num; person++) {
    Person *p = new Person();
    for (int part = 0; part < part_num; part++) {
      int index = (person * part_num + part) * keyshape[2];
      if (keypoints[index + 2] >  thresh) {
        p->set_part(part, keypoints[index] * scale, keypoints[index + 1] * scale);
      }
    }
    persons.push_back(p);
  }
  return persons;
}

std::string People::get_output(void) {
	std::string out_str = "\"people\": [\n";
	int person_num = keyshape[0];
	int part_num = keyshape[1];
	char buf[2048];   // reused for every part instead of a per-part heap allocation
	for (int person = 0; person < person_num; person++) {
		if (person != 0)
			out_str += ",\n";
		out_str += " {\n";
		for (int part = 0; part < part_num; part++) {
			if (part != 0) 
				out_str += ",\n ";
			int index = (person * part_num + part) * keyshape[2];

			if (keypoints[index + 2] > thresh) {
				snprintf(buf, sizeof(buf), " \"%d\":[%f, %f]", part, keypoints[index] * scale, keypoints[index + 1] * scale);
			}
			else {
				snprintf(buf, sizeof(buf), " \"%d\":[%f, %f]", part, 0.0, 0.0);
			}
			out_str += buf;
		}
		out_str += "\n }";
	}
	out_str += "\n ]";
	return out_str;
}

void People::render_pose_keypoints(cv::Mat& frame)
{
  const int num_keypoints = keyshape[1];
  unsigned int pairs[] =
  {
    1, 2, 1, 5, 2, 3, 3, 4, 5, 6, 6, 7, 1, 8, 8, 9, 9, 10,
    1, 11, 11, 12, 12, 13, 1, 0, 0, 14, 14, 16, 0, 15, 15, 17
  };
  float colors[] =
  {
    255.f, 0.f, 85.f, 255.f, 0.f, 0.f, 255.f, 85.f, 0.f, 255.f, 170.f, 0.f,
    255.f, 255.f, 0.f, 170.f, 255.f, 0.f, 85.f, 255.f, 0.f, 0.f, 255.f, 0.f,
    0.f, 255.f, 85.f, 0.f, 255.f, 170.f, 0.f, 255.f, 255.f, 0.f, 170.f, 255.f,
    0.f, 85.f, 255.f, 0.f, 0.f, 255.f, 255.f, 0.f, 170.f, 170.f, 0.f, 255.f,
    255.f, 0.f, 255.f, 85.f, 0.f, 255.f
  };
  const int pairs_size = sizeof(pairs) / sizeof(unsigned int);
  const int number_colors = sizeof(colors) / sizeof(float);

  for (int person = 0; person < keyshape[0]; ++person)
  {
    // Draw lines
    for (int pair = 0u; pair < pairs_size; pair += 2)
    {
      const int index1 = (person * num_keypoints + pairs[pair]) * keyshape[2];
      const int index2 = (person * num_keypoints + pairs[pair + 1]) * keyshape[2];
      if (keypoints[index1 + 2] > thresh && keypoints[index2 + 2] > thresh)
      {
        const int color_index = pairs[pair + 1] * 3;
        cv::Scalar color { colors[(color_index + 2) % number_colors],
          colors[(color_index + 1) % number_colors],
          colors[(color_index + 0) % number_colors]};
        cv::Point keypoint1{ intRoundUp(keypoints[index1] * scale), intRoundUp(keypoints[index1 + 1] * scale) };
        cv::Point keypoint2{ intRoundUp(keypoints[index2] * scale), intRoundUp(keypoints[index2 + 1] * scale) };
        cv::line(frame, keypoint1, keypoint2, color, 2);
      }
    }
    // Draw circles
    for (int part = 0; part < num_keypoints; ++part)
    {
      const int index = (person * num_keypoints + part) * keyshape[2];
      if (keypoints[index + 2] > thresh)
      {
        const int color_index = part * 3;
        cv::Scalar color { colors[(color_index + 2) % number_colors],
          colors[(color_index + 1) % number_colors],
          colors[(color_index + 0) % number_colors]};
        cv::Point center{ intRoundUp(keypoints[index] * scale), intRoundUp(keypoints[index + 1] * scale) };
        cv::circle(frame, center, 3, color, -1);
      }
    }
  }
}


================================================
FILE: server/src/people.hpp
================================================
#ifndef __PEOPLE
#define __PEOPLE
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/vector.hpp>
#include <opencv2/opencv.hpp>
#include <cfloat>
#include <iostream>
#include <cstring>
#include <cmath>
#include <string>
#include <sstream>
#include <deque>
#include <queue>

template<typename T>
inline int intRoundUp(const T a)
{
    return int(a+0.5f);
}

class Person
{
private:
  enum { 
    UNTRACK = -1,
    OVERLAP_NUM = 6, 
    HIS_NUM = 33, 
    CHANGE_NUM = 8, 

    // action 
    ACTION_TYPE_NUM = 4, ACTION_HIS_NUM = 10, 
    STAND = 0, WALK = 1, PUNCH = 2, KICK = 3, UNKNOWN = 4,

    // coco joint keypoint
    JOINT_NUM = 18,
    NOSE = 0, NECK = 1, RSHOULDER = 2, RELBOW = 3, RWRIST = 4, LSHOULDER = 5, LELBOW = 6,
    LWRIST = 7, RHIP = 8, RKNEE = 9, RANKLE = 10, LHIP = 11, LKNEE = 12, LANKLE = 13, 
    REYE = 14, LEYE = 15, REAR = 16, LEAR = 17
  };
  struct Joint {
    float x[JOINT_NUM];
    float y[JOINT_NUM];
  };

  struct Change {
    float dist[CHANGE_NUM];
    float deg[CHANGE_NUM];
    float cur_deg[CHANGE_NUM];
  };

  std::deque<Joint> history;
  std::deque<Change> change_history;
  std::deque<int> actions;

  int overlap_count;
  int track_id;
  int action;

  // for bounding box
  float max_x;
  float max_y;
  float min_x;
  float min_y;

  Person* enemy;

public:
  Person() : enemy(nullptr), overlap_count(0), track_id(UNTRACK), action(ACTION_TYPE_NUM), max_x(-FLT_MAX), max_y(-FLT_MAX), min_x(FLT_MAX), min_y(FLT_MAX) { history.assign(1, {0}); } // note: -FLT_MAX, since FLT_MIN is the smallest positive float

  ~Person() { if (enemy != nullptr) enemy->set_enemy(nullptr); }
  // set
  void set_part(int part, float x, float y);
  void set_id(int id) { track_id = id; }
  void set_action(int type);  
  void set_enemy(Person* p) { enemy = p; }
  void set_rect(cv::Rect_<float>& rect);

  // get
  inline int get_id(void) const { return track_id; }
  inline int get_action(void) const { return action; }
  inline const char* get_action_str(void) const { 
    static const char* action_str[] = {"STAND", "WALK", "PUNCH", "KICK", "UNKNOWN"};
    return action_str[action]; 
  }
  inline const Person* get_enemy(void) const { return enemy; }
  cv::Rect_<float> get_rect(void) const;
  cv::Rect_<float> get_crash_rect(const Person& p) const;

  // crash
  inline bool is_danger(void) const { return (action == PUNCH || action == KICK); }
  bool check_crash(const Person& p) const;

  void update(Person* n_p);
  bool has_output(void);
  std::string get_history(void) const;

  // util
  float get_dist(float x1, float y1, float x2, float y2); 
  float get_deg(float x1, float y1, float x2, float y2);

  Person& operator=(const Person& p);
  friend std::ostream& operator<<(std::ostream &out, const Person &p);
};

class People
{
public:
  friend class boost::serialization::access;
  const float thresh = 0.05;
  std::vector<float> keypoints;
  std::vector<int> keyshape;
  float scale;

  People() {}

  People(std::vector<float> _keypoints, std::vector<int> _keyshape, float _scale) :
    keypoints(_keypoints), keyshape(_keyshape), scale(_scale) {}

  inline int get_person_num(void) const { return keyshape[0]; };

  std::vector<Person*> to_person(void);
  std::string get_output(void);
  void render_pose_keypoints(cv::Mat& frame);

  template<class Archive>
  void serialize(Archive & ar, const unsigned int version)
  {
    ar & keypoints;
    ar & keyshape;
    ar & scale;
  }
};

#endif


================================================
FILE: server/src/pose_detector.cpp
================================================
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;
#include "pose_detector.hpp"

#define POSE_MAX_PEOPLE 96
#define NET_OUT_CHANNELS 57 // 38 for pafs, 19 for parts

template<typename T>
inline int intRound(const T a)
{
  return int(a+0.5f);
}

template<typename T>
inline T fastMin(const T a, const T b)
{
  return (a < b ? a : b);
}

void PoseDetector::connect_bodyparts
    (
    vector<float>& pose_keypoints,
    const float* const map,
    const float* const peaks,
    int mapw,
    int maph,
    const int inter_min_above_th,
    const float inter_th,
    const int min_subset_cnt,
    const float min_subset_score,
    vector<int>& keypoint_shape
    )
{
  keypoint_shape.resize(3);
  const int body_part_pairs[] =
  {
    1, 2, 1, 5, 2, 3, 3, 4, 5, 6, 6, 7, 1, 8, 8, 9, 9, 10, 1, 11, 11,
    12, 12, 13, 1, 0, 0, 14, 14, 16, 0, 15, 15, 17, 2, 16, 5, 17
  };
  const int limb_idx[] =
  {
    31, 32, 39, 40, 33, 34, 35, 36, 41, 42, 43, 44, 19, 20, 21, 22, 23, 24, 25,
    26, 27, 28, 29, 30, 47, 48, 49, 50, 53, 54, 51, 52, 55, 56, 37, 38, 45, 46
  };
  const int num_body_parts = 18; // COCO part number
  const int num_body_part_pairs = num_body_parts + 1; // 19 limb pairs in the COCO model
  std::vector<std::pair<std::vector<int>, double>> subset;
  const int subset_counter_index = num_body_parts;
  const int subset_size = num_body_parts + 1;
  const int peaks_offset = 3 * (POSE_MAX_PEOPLE + 1);
  const int map_offset = mapw * maph;

  for (unsigned int pair_index = 0u; pair_index < num_body_part_pairs; ++pair_index)
  {
    const int body_partA = body_part_pairs[2 * pair_index];
    const int body_partB = body_part_pairs[2 * pair_index + 1];
    const float* candidateA = peaks + body_partA*peaks_offset;
    const float* candidateB = peaks + body_partB*peaks_offset;
    const int nA = (int)(candidateA[0]); // number of part A candidates
    const int nB = (int)(candidateB[0]); // number of part B candidates

    // add parts into the subset in special case
    if (nA == 0 || nB == 0)
    {
      // Change w.r.t. other
      if (nA == 0) // nB == 0 or not
      {
        for (int i = 1; i <= nB; ++i)
        {
          bool num = false;
          for (unsigned int j = 0u; j < subset.size(); ++j)
          {
            const int off = body_partB*peaks_offset + i * 3 + 2;
            if (subset[j].first[body_partB] == off)
            {
              num = true;
              break;
            }
          }
          if (!num)
          {
            std::vector<int> row_vector(subset_size, 0);
            // store the index
            row_vector[body_partB] = body_partB*peaks_offset + i * 3 + 2;
            // the parts number of that person
            row_vector[subset_counter_index] = 1;
            // total score
            const float subsetScore = candidateB[i * 3 + 2];
            subset.emplace_back(std::make_pair(row_vector, subsetScore));
          }
        }
      }
      else // if (nA != 0 && nB == 0)
      {
        for (int i = 1; i <= nA; i++)
        {
          bool num = false;
          for (unsigned int j = 0u; j < subset.size(); ++j)
          {
            const int off = body_partA*peaks_offset + i * 3 + 2;
            if (subset[j].first[body_partA] == off)
            {
              num = true;
              break;
            }
          }
          if (!num)
          {
            std::vector<int> row_vector(subset_size, 0);
            // store the index
            row_vector[body_partA] = body_partA*peaks_offset + i * 3 + 2;
            // parts number of that person
            row_vector[subset_counter_index] = 1;
            // total score
            const float subsetScore = candidateA[i * 3 + 2];
            subset.emplace_back(std::make_pair(row_vector, subsetScore));
          }
        }
      }
    }
    else // if (nA != 0 && nB != 0)
    {
      std::vector<std::tuple<double, int, int>> temp;
      const int num_inter = 10;
      // limb PAF x-direction heatmap
      const float* const mapX = map + limb_idx[2 * pair_index] * map_offset;
      // limb PAF y-direction heatmap
      const float* const mapY = map + limb_idx[2 * pair_index + 1] * map_offset;
      // start greedy algorithm
      for (int i = 1; i <= nA; i++)
      {
        for (int j = 1; j <= nB; j++)
        {
          const int dX = candidateB[j * 3] - candidateA[i * 3];
          const int dY = candidateB[j * 3 + 1] - candidateA[i * 3 + 1];
          const float norm_vec = float(std::sqrt(dX*dX + dY*dY));
          // If the peaksPtr are coincident. Don't connect them.
          if (norm_vec > 1e-6)
          {
            const float sX = candidateA[i * 3];
            const float sY = candidateA[i * 3 + 1];
            const float vecX = dX / norm_vec;
            const float vecY = dY / norm_vec;
            float sum = 0.;
            int count = 0;
            for (int lm = 0; lm < num_inter; lm++)
            {
              const int mX = fastMin(mapw - 1, intRound(sX + lm*dX / num_inter));
              const int mY = fastMin(maph - 1, intRound(sY + lm*dY / num_inter));
              const int idx = mY * mapw + mX;
              const float score = (vecX*mapX[idx] + vecY*mapY[idx]);
              if (score > inter_th)
              {
                sum += score;
                ++count;
              }
            }

            // parts score + connection score
            if (count > inter_min_above_th)
            {
              temp.emplace_back(std::make_tuple(sum / count, i, j));
            }
          }
        }
      }
      // select the top minAB connections, assuming that each part occurs only once
      // sort rows in descending order of parts + connection score
      if (!temp.empty())
      {
        std::sort(temp.begin(), temp.end(), std::greater<std::tuple<double, int, int>>());
      }
      std::vector<std::tuple<int, int, double>> connectionK;

      const int minAB = fastMin(nA, nB);
      // assuming that each part occurs only once, filter out connections that reuse a part
      std::vector<int> occurA(nA, 0);
      std::vector<int> occurB(nB, 0);
      int counter = 0;
      for (unsigned int row = 0u; row < temp.size(); row++)
      {
        const float score = std::get<0>(temp[row]);
        const int aidx = std::get<1>(temp[row]);
        const int bidx = std::get<2>(temp[row]);
        if (!occurA[aidx - 1] && !occurB[bidx - 1])
        {
          // save two part score "position" and limb mean PAF score
          connectionK.emplace_back(std::make_tuple(body_partA*peaks_offset + aidx * 3 + 2,
                body_partB*peaks_offset + bidx * 3 + 2, score));
          ++counter;
          if (counter == minAB)
          {
            break;
          }
          occurA[aidx - 1] = 1;
          occurB[bidx - 1] = 1;
        }
      }
      // Cluster all the body part candidates into subset based on the part connection
      // initialize first body part connection
      if (pair_index == 0)
      {
        for (const auto& connectionKI : connectionK)
        {
          std::vector<int> row_vector(num_body_parts + 3, 0);
          const int indexA = std::get<0>(connectionKI);
          const int indexB = std::get<1>(connectionKI);
          const double score = std::get<2>(connectionKI);
          row_vector[body_part_pairs[0]] = indexA;
          row_vector[body_part_pairs[1]] = indexB;
          row_vector[subset_counter_index] = 2;
          // add the score of parts and the connection
          const double subset_score = peaks[indexA] + peaks[indexB] + score;
          subset.emplace_back(std::make_pair(row_vector, subset_score));
        }
      }
      // Add ear connections (for the case where the person is facing away from the camera)
      else if (pair_index == 17 || pair_index == 18)
      {
        for (const auto& connectionKI : connectionK)
        {
          const int indexA = std::get<0>(connectionKI);
          const int indexB = std::get<1>(connectionKI);
          for (auto& subsetJ : subset)
          {
            auto& subsetJ_first = subsetJ.first[body_partA];
            auto& subsetJ_first_plus1 = subsetJ.first[body_partB];
            if (subsetJ_first == indexA && subsetJ_first_plus1 == 0)
            {
              subsetJ_first_plus1 = indexB;
            }
            else if (subsetJ_first_plus1 == indexB && subsetJ_first == 0)
            {
              subsetJ_first = indexA;
            }
          }
        }
      }
      else
      {
        if (!connectionK.empty())
        {
          for (unsigned int i = 0u; i < connectionK.size(); ++i)
          {
            const int indexA = std::get<0>(connectionK[i]);
            const int indexB = std::get<1>(connectionK[i]);
            const double score = std::get<2>(connectionK[i]);
            int num = 0;
            // if A is already in the subset, add B
            for (unsigned int j = 0u; j < subset.size(); j++)
            {
              if (subset[j].first[body_partA] == indexA)
              {
                subset[j].first[body_partB] = indexB;
                ++num;
                subset[j].first[subset_counter_index] = subset[j].first[subset_counter_index] + 1;
                subset[j].second = subset[j].second + peaks[indexB] + score;
              }
            }
            // if A is not found in the subset, create new one and add both
            if (num == 0)
            {
              std::vector<int> row_vector(subset_size, 0);
              row_vector[body_partA] = indexA;
              row_vector[body_partB] = indexB;
              row_vector[subset_counter_index] = 2;
              const float subsetScore = peaks[indexA] + peaks[indexB] + score;
              subset.emplace_back(std::make_pair(row_vector, subsetScore));
            }
          }
        }
      }
    }
  }

  // Delete people below thresholds, and save to output
  int number_people = 0;
  std::vector<int> valid_subset_indexes;
  valid_subset_indexes.reserve(fastMin((size_t)POSE_MAX_PEOPLE, subset.size()));
  for (unsigned int index = 0; index < subset.size(); ++index)
  {
    const int subset_counter = subset[index].first[subset_counter_index];
    const double subset_score = subset[index].second;
    if (subset_counter >= min_subset_cnt && (subset_score / subset_counter) > min_subset_score)
    {
      ++number_people;
      valid_subset_indexes.emplace_back(index);
      if (number_people == POSE_MAX_PEOPLE)
      {
        break;
      }
    }
  }

  // Fill and return pose_keypoints
  keypoint_shape = { number_people, (int)num_body_parts, 3 };
  if (number_people > 0)
  {
    pose_keypoints.resize(number_people * (int)num_body_parts * 3);
  }
  else
  {
    pose_keypoints.clear();
  }
  for (unsigned int person = 0u; person < valid_subset_indexes.size(); ++person)
  {
    const auto& subsetI = subset[valid_subset_indexes[person]].first;
    for (int bodyPart = 0; bodyPart < num_body_parts; bodyPart++)
    {
      const int base_offset = (person*num_body_parts + bodyPart) * 3;
      const int body_part_index = subsetI[bodyPart];
      if (body_part_index > 0)
      {
        pose_keypoints[base_offset] = peaks[body_part_index - 2];
        pose_keypoints[base_offset + 1] = peaks[body_part_index - 1];
        pose_keypoints[base_offset + 2] = peaks[body_part_index];
      }
      else
      {
        pose_keypoints[base_offset] = 0.f;
        pose_keypoints[base_offset + 1] = 0.f;
        pose_keypoints[base_offset + 2] = 0.f;
      }
    }
  }
}

void PoseDetector::find_heatmap_peaks
    (
    const float *src,
    float *dst,
    const int SRCW,
    const int SRCH,
    const int SRC_CH,
    const float TH
    )
{
  // find peaks (8-connected neighbor), weights with 7 by 7 area to get sub-pixel location and response
  const int SRC_PLANE_OFFSET = SRCW * SRCH;
  // add 1 for saving total people count, 3 for x, y, score
  const int DST_PLANE_OFFSET = (POSE_MAX_PEOPLE + 1) * 3;
  float *dstptr = dst;
  int c = 0;
  int x = 0;
  int y = 0;
  int i = 0;
  int j = 0;
  // TODO: reduce multiplication by using pointer
  for(c = 0; c < SRC_CH - 1; ++c)
  {
    int num_people = 0;
    for(y = 1; y < SRCH - 1 && num_people != POSE_MAX_PEOPLE; ++y)
    {
      for(x = 1; x < SRCW - 1 && num_people != POSE_MAX_PEOPLE; ++x)
      {
        int idx  = y * SRCW + x;
        float value = src[idx];
        if (value > TH)
        {
          const float TOPLEFT = src[idx - SRCW - 1];
          const float TOP = src[idx - SRCW];
          const float TOPRIGHT = src[idx - SRCW + 1];
          const float LEFT = src[idx - 1];
          const float RIGHT = src[idx + 1];
          const float BOTTOMLEFT = src[idx + SRCW - 1];
          const float BOTTOM = src[idx + SRCW];
          const float BOTTOMRIGHT = src[idx + SRCW + 1];
          if(value > TOPLEFT && value > TOP && value > TOPRIGHT && value > LEFT &&
              value > RIGHT && value > BOTTOMLEFT && value > BOTTOM && value > BOTTOMRIGHT)
          {
            float x_acc = 0;
            float y_acc = 0;
            float score_acc = 0;
            for (i = -3; i <= 3; ++i)
            {
              int ux = x + i;
              if (ux >= 0 && ux < SRCW)
              {
                for (j = -3; j <= 3; ++j)
                {
                  int uy = y + j;
                  if (uy >= 0 && uy < SRCH)
                  {
                    float score = src[uy * SRCW + ux];
                    x_acc += ux * score;
                    y_acc += uy * score;
                    score_acc += score;
                  }
                }
              }
            }
            x_acc /= score_acc;
            y_acc /= score_acc;
            score_acc = value;
            dstptr[(num_people + 1) * 3 + 0] = x_acc;
            dstptr[(num_people + 1) * 3 + 1] = y_acc;
            dstptr[(num_people + 1) * 3 + 2] = score_acc;
            ++num_people;
          }
        }
      }
    }
    dstptr[0] = num_people;
    src += SRC_PLANE_OFFSET;
    dstptr += DST_PLANE_OFFSET;
  }
}

Mat PoseDetector::create_netsize_im
    (
    const Mat &im,
    const int netw,
    const int neth,
    float *scale
    )
{
  // for tall image
  int newh = neth;
  float s = newh / (float)im.rows;
  int neww = im.cols * s;
  if (neww > netw)
  {
    //for fat image
    neww = netw;
    s = neww / (float)im.cols;
    newh = im.rows * s;
  }

  *scale = 1 / s;
  Rect dst_area(0, 0, neww, newh);
  Mat dst = Mat::zeros(neth, netw, CV_8UC3);
  resize(im, dst(dst_area), Size(neww, newh));
  return dst;
}

PoseDetector::PoseDetector(const char *cfg_path, const char *weight_path, int gpu_id)
    : Detector(cfg_path, weight_path, gpu_id) {
  det_people = nullptr;
  // initialize net
  net_inw = get_net_width();
  net_inh = get_net_height();
  net_outw = get_net_out_width();
  net_outh = get_net_out_height();
}

void PoseDetector::detect(cv::Mat im, float thresh) {
  // 1. resize to net input size, put scaled image on the top left
  float scale = 0.0f;
  Mat netim = create_netsize_im(im, net_inw, net_inh, &scale);

  // 2. normalize to float type
  netim.convertTo(netim, CV_32F, 1 / 256.f, -0.5);

  // 3. split channels
  float *netin_data = new float[net_inw * net_inh * 3]();
  float *netin_data_ptr = netin_data;
  vector<Mat> input_channels;
  for (int i = 0; i < 3; ++i)
  {
    Mat channel(net_inh, net_inw, CV_32FC1, netin_data_ptr);
    input_channels.emplace_back(channel);
    netin_data_ptr += (net_inw * net_inh);
  }
  split(netim, input_channels);

  // 4. feed forward
  double time_begin = getTickCount();
  float *netoutdata = Detector::predict(netin_data);
  double fee_time = (getTickCount() - time_begin) / getTickFrequency() * 1000;
#ifdef DEBUG
  cout << "forward fee: " << fee_time << "ms" << endl;
#endif
  // 5. resize net output back to input size to get heatmap
  float *heatmap = new float[net_inw * net_inh * NET_OUT_CHANNELS];
  for (int i = 0; i < NET_OUT_CHANNELS; ++i)
  {
    Mat netout(net_outh, net_outw, CV_32F, (netoutdata + net_outh*net_outw*i));
    Mat nmsin(net_inh, net_inw, CV_32F, heatmap + net_inh*net_inw*i);
    resize(netout, nmsin, Size(net_inw, net_inh), 0, 0, CV_INTER_CUBIC);
  }

  // 6. get heatmap peaks
  float *heatmap_peaks = new float[3 * (POSE_MAX_PEOPLE+1) * (NET_OUT_CHANNELS-1)];
  find_heatmap_peaks(heatmap, heatmap_peaks, net_inw, net_inh, NET_OUT_CHANNELS, 0.05);

  // 7. link parts
  vector<float> keypoints;
  vector<int> shape;
  connect_bodyparts(keypoints, heatmap, heatmap_peaks, net_inw, net_inh, 9, 0.05, 6, 0.4, shape);

  delete [] heatmap_peaks;
  delete [] heatmap;
  delete [] netin_data;

  // people
  if (det_people != nullptr)
    delete det_people;
  det_people = new People(keypoints, shape, scale);
}

void PoseDetector::draw(cv::Mat mat)
{
  det_people->render_pose_keypoints(mat);
}

std::string PoseDetector::det_to_json(int frame_id)
{
  std::string out_str;
  char tmp_buf[64];
  snprintf(tmp_buf, sizeof(tmp_buf), "{\n \"frame_id\":%d, \n ", frame_id);
  out_str = tmp_buf;
  out_str += det_people->get_output();
  out_str += "\n}";
  return out_str;
}

PoseDetector::~PoseDetector() {
}


================================================
FILE: server/src/pose_detector.hpp
================================================
#ifndef __POSE_DETECTOR
#define __POSE_DETECTOR
#include <vector>
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "yolo_v2_class.hpp"
#include "DetectorInterface.hpp"
#include "people.hpp"

class PoseDetector : public Detector, public DetectorInterface
{
  private:
    int net_inw;
    int net_inh;
    int net_outw;
    int net_outh;
    People* det_people;
  public:
    PoseDetector(const char *cfg_path, const char *weight_path, int gpu_id);
    ~PoseDetector();
    inline People* get_people(void) { return det_people; }
    virtual void detect(cv::Mat mat, float thresh);
    virtual void draw(cv::Mat mat);
    virtual std::string det_to_json(int frame_id);
  private:
    void connect_bodyparts
      (
       std::vector<float>& pose_keypoints,
       const float* const map,
       const float* const peaks,
       int mapw,
       int maph,
       const int inter_min_above_th,
       const float inter_th,
       const int min_subset_cnt,
       const float min_subset_score,
       std::vector<int>& keypoint_shape
      );

    void find_heatmap_peaks
      (
       const float *src,
       float *dst,
       const int SRCW,
       const int SRCH,
       const int SRC_CH,
       const float TH
      );

    cv::Mat create_netsize_im
      (
       const cv::Mat &im,
       const int netw,
       const int neth,
       float *scale
      );
};

#endif


================================================
FILE: server/src/share_queue.h
================================================
#pragma once
// https://stackoverflow.com/questions/36762248/why-is-stdqueue-not-thread-safe
#include <deque>
#include <mutex>
#include <condition_variable>

template <typename T>
class SharedQueue
{
public:
	SharedQueue();
	~SharedQueue();

	T& front();
	void pop_front();

	void push_back(const T& item);
	void push_back(T&& item);

	int size();
	bool empty();

private:
	std::deque<T> queue_;
	std::mutex mutex_;
	std::condition_variable cond_;
};

template <typename T>
SharedQueue<T>::SharedQueue() {}

template <typename T>
SharedQueue<T>::~SharedQueue() {}

template <typename T>
T& SharedQueue<T>::front()
{
	std::unique_lock<std::mutex> mlock(mutex_);
	while (queue_.empty())
	{
		cond_.wait(mlock);
	}
	return queue_.front();
}

template <typename T>
void SharedQueue<T>::pop_front()
{
	std::unique_lock<std::mutex> mlock(mutex_);
	while (queue_.empty())
	{
		cond_.wait(mlock);
	}
	queue_.pop_front();
}

template <typename T>
void SharedQueue<T>::push_back(const T& item)
{
	std::unique_lock<std::mutex> mlock(mutex_);
	queue_.push_back(item);
	mlock.unlock();     // unlock before notification to reduce mutex contention
	cond_.notify_one(); // notify one waiting thread

}

template <typename T>
void SharedQueue<T>::push_back(T&& item)
{
	std::unique_lock<std::mutex> mlock(mutex_);
	queue_.push_back(std::move(item));
	mlock.unlock();     // unlock before notification to reduce mutex contention
	cond_.notify_one(); // notify one waiting thread

}

template <typename T>
int SharedQueue<T>::size()
{
	std::unique_lock<std::mutex> mlock(mutex_);
	int size = queue_.size();
	mlock.unlock();
	return size;
}

template <typename T>
bool SharedQueue<T>::empty()
{
	std::unique_lock<std::mutex> mlock(mutex_);
	return queue_.empty();
}


================================================
FILE: server/src/sink.cpp
================================================
#include <zmq.h>
#include <stdio.h>
#include <iostream>
#include <sstream>
#include <fstream>
#include <string>
#include <assert.h>
#include <map>
#include <csignal>
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <tbb/concurrent_hash_map.h>
#include <opencv2/opencv.hpp>
#include "share_queue.h"
#include "people.hpp"
#include "Tracker.hpp"
#include "frame.hpp"

// ZMQ
void *sock_pull;
void *sock_pub;
void *sock_rnn;

// ShareQueue
tbb::concurrent_hash_map<int, Frame> frame_map;
SharedQueue<Frame> processed_frame_queue;

// pool
Frame_pool *frame_pool;

// signal
volatile bool exit_flag = false;
void sig_handler(int s)
{
  exit_flag = true;
}

void *recv_in_thread(void *ptr)
{
  int recv_json_len;
  unsigned char json_buf[JSON_BUF_LEN];
  Frame frame;

  while(!exit_flag) {
    recv_json_len = zmq_recv(sock_pull, json_buf, JSON_BUF_LEN, ZMQ_NOBLOCK);

    // zmq_recv returns the full message size even when the message was
    // truncated to the buffer, so reject anything that would not fit ('\0' included)
    if (recv_json_len > 0 && recv_json_len < JSON_BUF_LEN) {
      frame = frame_pool->alloc_frame();
      json_buf[recv_json_len] = '\0';
      json_to_frame(json_buf, frame);

#ifdef DEBUG
      std::cout << "Sink | Recv From Worker | SEQ : " << frame.seq_buf
        << " LEN : " << frame.msg_len << std::endl;
#endif

      tbb::concurrent_hash_map<int, Frame>::accessor a;
      while(1)
      {
        if(frame_map.insert(a, atoi((char *)frame.seq_buf))) {
          a->second = frame;
          break;
        }
      }
    }
  }
  return nullptr;
}

void *send_in_thread(void *ptr)
{
  int send_json_len;
  unsigned char json_buf[JSON_BUF_LEN];
  Frame frame;

  while(!exit_flag) {
    if (processed_frame_queue.size() > 0) {
      frame = processed_frame_queue.front();
      processed_frame_queue.pop_front();

#ifdef DEBUG
      std::cout << "Sink | Pub To Client | SEQ : " << frame.seq_buf
        << " LEN : " << frame.msg_len << std::endl;
#endif

      send_json_len = frame_to_json(json_buf, frame);
      zmq_send(sock_pub, json_buf, send_json_len, 0);

      frame_pool->free_frame(frame);
    }
  }
  return nullptr;
}

int main()
{
  // install Ctrl-C handler so the receive/send loops can exit cleanly
  std::signal(SIGINT, sig_handler);

  // ZMQ
  int ret;
  void *context = zmq_ctx_new();

  sock_pull = zmq_socket(context, ZMQ_PULL);
  ret = zmq_bind(sock_pull, "ipc://processed");
  assert(ret != -1);

  sock_pub = zmq_socket(context, ZMQ_PUB);
  ret = zmq_bind(sock_pub, "tcp://*:5570");
  assert(ret != -1);

  sock_rnn = zmq_socket(context, ZMQ_REQ);
  ret = zmq_connect(sock_rnn, "ipc://action");
  assert(ret != -1);

  // frame_pool
  frame_pool = new Frame_pool(5000);

  // Thread
  pthread_t recv_thread;
  if (pthread_create(&recv_thread, 0, recv_in_thread, 0))
    std::cerr << "Thread creation failed (recv_thread)" << std::endl;

  pthread_t send_thread;
  if (pthread_create(&send_thread, 0, send_in_thread, 0))
    std::cerr << "Thread creation failed (send_thread)" << std::endl;

  pthread_detach(send_thread);
  pthread_detach(recv_thread);

  // serialize 
  std::stringstream ss;
  std::stringbuf *pbuf = ss.rdbuf();

  // frame
  Frame frame;
  int frame_len;
  unsigned char *frame_buf_ptr;
  char json_tmp_buf[1024];

  // Tracker
  Tracker tracker;
  TrackingBox tb;
  std::vector<TrackingBox> det_data;
  std::vector<TrackingBox> track_data;
  volatile int track_frame = 1;
  int init_flag = 0;

  // person
  std::vector<Person*> person_data;
  std::stringstream p_ss;
  std::ofstream output_file("output.txt");

  // rnn
  unsigned char rnn_buf[100];

  // draw
  std::vector<int> param = {cv::IMWRITE_JPEG_QUALITY, 60 };
  const int CNUM = 20;
  cv::RNG rng(0xFFFFFFFF);
  cv::Scalar_<int> randColor[CNUM];
  for (int i = 0; i < CNUM; i++)
    rng.fill(randColor[i], cv::RNG::UNIFORM, 0, 256);

  while(!exit_flag) {
    if (!frame_map.empty()) {
      tbb::concurrent_hash_map<int, Frame>::accessor c_a;

      if (frame_map.find(c_a, (const int)track_frame))
      {
        frame = (Frame)c_a->second;
        while(1) {
          if (frame_map.erase(c_a))
            break;
        }

        // unsigned char array -> vector
        frame_len = frame.msg_len;
        frame_buf_ptr = frame.msg_buf;
        std::vector<unsigned char> raw_vec(frame_buf_ptr, frame_buf_ptr + frame_len);
        
        // vector -> mat
        cv::Mat raw_mat = cv::imdecode(cv::Mat(raw_vec), 1);

        // get people & unserialize
        ss.str(""); // ss clear
        pbuf->sputn((const char*)frame.det_buf, frame.det_len);
        People people;
        {
          boost::archive::text_iarchive ia(ss);
          ia >> people;
        }   

        // detect people result to json
        std::string det_json;
        sprintf(json_tmp_buf, "{\n \"frame_id\":%d, \n ", track_frame);
        det_json = json_tmp_buf;
        det_json += people.get_output();
        det_json += "\n}";

        frame.det_len = det_json.size();
        memcpy(frame.det_buf, det_json.c_str(), frame.det_len);
        frame.det_buf[frame.det_len] = '\0';

        // draw people skeleton
        people.render_pose_keypoints(raw_mat);

        // people to person
        person_data.clear();
        person_data = people.to_person(); 

        // ready to track
        det_data.clear();
        for (auto it = person_data.begin(); it != person_data.end(); it++) {
          tb.frame = track_frame;
          tb.box = (*it)->get_rect();
          tb.p = (*it);
          det_data.push_back(tb);
        }

        // no people detected in this frame
        if (det_data.size() < 1) {
          processed_frame_queue.push_back(frame);
          track_frame++;
          continue;
        }

        // Track
        track_data.clear();
        if (init_flag == 0) {
          track_data = tracker.init(det_data);
          init_flag = 1;
        }
        else {
          track_data = tracker.update(det_data);
        }
  
        p_ss.str("");
        for (unsigned int i = 0; i < track_data.size(); i++) {
          // get person 
          Person* track_person = track_data[i].p;
          
          // get history
          if (i != 0)
            p_ss << '\n';
          p_ss << track_person->get_history();

          // get_output for train
          if (track_person->has_output()) {
            output_file << *(track_person);
          }
        }
        
        
        // RNN action
        double time_begin = cv::getTickCount(); 

        std::string hists = p_ss.str();
        if (hists.size() > 0) {
          zmq_send(sock_rnn, hists.c_str(), hists.size(), 0);
          zmq_recv(sock_rnn, rnn_buf, 100, 0);

          // action update
          for (unsigned int i = 0; i < track_data.size(); i++) {
            track_data[i].p->set_action(rnn_buf[i] - '0');
          }
        }  

        double fee_time = (cv::getTickCount() - time_begin) / cv::getTickFrequency() * 1000;
#ifdef DEBUG
        std::cout << "RNN fee: " << fee_time << "ms | T : " << track_frame << std::endl;
#endif

        // check crash
        for (unsigned int i = 0; i < track_data.size(); i++) {
          Person* me = track_data[i].p;
          if (me->is_danger()) { // punch or kick
            for (unsigned int j = 0; j < track_data.size(); j++) {
              if (j == i)
                continue;
              Person* other = track_data[j].p;

              // if me and other crash
              if (me->check_crash(*other)) {
                //std::cout << "CRASH !!! " << me->get_id() << " : " << other->get_id() << std::endl;
                me->set_enemy(other);
                other->set_enemy(me);
              }
            }
          }

          // draw fight bounding box
          const Person *my_enemy = me->get_enemy();
          if (my_enemy != nullptr) {
            cv::Rect_<float> crash_rect = me->get_crash_rect(*my_enemy);

            cv::putText(raw_mat, "FIGHT", cv::Point(crash_rect.x, crash_rect.y), cv::FONT_HERSHEY_DUPLEX, 0.8, cv::Scalar(0, 255, 0), 2);
            cv::rectangle(raw_mat, crash_rect, cv::Scalar(0, 255, 0), 2, 8, 0);
          }
        }
        
        
        // draw Track data
        for (auto td : track_data) {
          // draw track_id
          cv::putText(raw_mat, std::to_string(td.id), cv::Point(td.box.x, td.box.y), cv::FONT_HERSHEY_DUPLEX, 0.8, randColor[td.id % CNUM], 2);
          // draw action label
          cv::putText(raw_mat, td.p->get_action_str(), cv::Point(td.box.x, td.box.y + 30), cv::FONT_HERSHEY_DUPLEX, 0.8, randColor[td.id % CNUM], 2);
          // draw person bounding box
          cv::rectangle(raw_mat, td.box, randColor[td.id % CNUM], 2, 8, 0);
        }

        // draw_mat -> vector
        std::vector<unsigned char> res_vec;
        cv::imencode(".jpg", raw_mat, res_vec, param);

        // vector -> frame array
        frame.msg_len = res_vec.size();
        std::copy(res_vec.begin(), res_vec.end(), frame.msg_buf);

        processed_frame_queue.push_back(frame);
        track_frame++;
      }
    }
  }

  delete frame_pool;

  output_file.close();

  zmq_close(sock_pull);
  zmq_close(sock_pub);
  zmq_close(sock_rnn);

  zmq_ctx_destroy(context);
  return 0;
}


================================================
FILE: server/src/ventilator.cpp
================================================
#include <zmq.h>
#include <iostream>
#include <cassert>
#include <pthread.h>
#include <csignal>
#include "share_queue.h"
#include "frame.hpp"

// ZMQ
void *sock_pull;
void *sock_push;

// ShareQueue
SharedQueue<Frame> frame_queue;

// pool
Frame_pool *frame_pool;

// signal
volatile bool exit_flag = false;
void sig_handler(int s)
{
  exit_flag = true;
}

void *recv_in_thread(void *ptr)
{
  int recv_json_len;
  unsigned char json_buf[JSON_BUF_LEN];
  Frame frame;

  while(!exit_flag) {
    recv_json_len = zmq_recv(sock_pull, json_buf, JSON_BUF_LEN, ZMQ_NOBLOCK);

    // zmq_recv returns the full message size even when the message was
    // truncated to the buffer, so reject anything that would not fit ('\0' included)
    if (recv_json_len > 0 && recv_json_len < JSON_BUF_LEN) {
      frame = frame_pool->alloc_frame();
      json_buf[recv_json_len] = '\0';
      json_to_frame(json_buf, frame);
#ifdef DEBUG
      std::cout << "Ventilator | Recv From Client | SEQ : " << frame.seq_buf 
        << " LEN : " << frame.msg_len << std::endl;
#endif
      frame_queue.push_back(frame);
    }
  }
  return nullptr;
}

void *send_in_thread(void *ptr)
{
  int send_json_len;
  unsigned char json_buf[JSON_BUF_LEN];
  Frame frame;
  while(!exit_flag) {
    if (frame_queue.size() > 0) {
      frame = frame_queue.front();
      frame_queue.pop_front();

#ifdef DEBUG
      std::cout << "Ventilator | Send To Worker | SEQ : " << frame.seq_buf 
        << " LEN : " << frame.msg_len << std::endl;
#endif

      send_json_len = frame_to_json(json_buf, frame);
      zmq_send(sock_push, json_buf, send_json_len, 0);

      frame_pool->free_frame(frame);
    }
  }
  return nullptr;
}
int main()
{
  // install the signal handler so Ctrl-C sets exit_flag
  signal(SIGINT, sig_handler);

  // ZMQ
  int ret;
  void *context = zmq_ctx_new();
  sock_pull = zmq_socket(context, ZMQ_PULL);
  ret = zmq_bind(sock_pull, "tcp://*:5575");
  assert(ret != -1);

  sock_push = zmq_socket(context, ZMQ_PUSH);
  ret = zmq_bind(sock_push, "ipc://unprocessed");
  assert(ret != -1);

  // frame_pool
  frame_pool = new Frame_pool();

  // Thread
  pthread_t recv_thread;
  if (pthread_create(&recv_thread, 0, recv_in_thread, 0))
    std::cerr << "Thread creation failed (recv_thread)" << std::endl;

  pthread_t send_thread;
  if (pthread_create(&send_thread, 0, send_in_thread, 0))
    std::cerr << "Thread creation failed (send_thread)" << std::endl;

  pthread_detach(send_thread);
  pthread_detach(recv_thread);

  while (!exit_flag);  // spin until a signal sets exit_flag

  delete frame_pool;
  zmq_close(sock_pull);
  zmq_close(sock_push);
  zmq_ctx_destroy(context);
}

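The ventilator is a pure relay: one thread drains an upstream socket into a shared queue, another drains the queue into a downstream socket. A minimal Python sketch of that relay pattern, with plain `queue.Queue` objects standing in for the ZeroMQ sockets (this illustrates the threading pattern only, not the project's wire protocol):

```python
import queue
import threading

def relay(inbox, outbox, stop):
    # like ventilator.cpp: drain the shared queue and forward downstream
    # until the stop flag is set and no work remains
    while not stop.is_set() or not inbox.empty():
        try:
            frame = inbox.get(timeout=0.05)
        except queue.Empty:
            continue
        outbox.put(frame)

inbox, outbox = queue.Queue(), queue.Queue()
stop = threading.Event()
t = threading.Thread(target=relay, args=(inbox, outbox, stop))
t.start()
for seq in range(5):
    inbox.put({"seq": seq})   # stand-in for frames arriving from the client
stop.set()
t.join()
received = [outbox.get_nowait()["seq"] for _ in range(5)]
```

The queue decouples the two socket loops, so a slow downstream consumer never blocks the receive side.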

================================================
FILE: server/src/worker.cpp
================================================
#include <zmq.h>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <memory>
#include <chrono>
#include <csignal>
#include <assert.h>
#include <pthread.h>
#include <boost/archive/text_oarchive.hpp>
#include "share_queue.h"
#include "frame.hpp"
#include "args.hpp"
#include "pose_detector.hpp"
// opencv
#include <opencv2/opencv.hpp>			// C++

// ZMQ
void *sock_pull;
void *sock_push;

// ShareQueue
SharedQueue<Frame> unprocessed_frame_queue;
SharedQueue<Frame> processed_frame_queue;

// pool
Frame_pool *frame_pool;

// signal
volatile bool exit_flag = false;
void sig_handler(int s)
{
  exit_flag = true;
}

void *recv_in_thread(void *ptr)
{
  int recv_json_len;
  unsigned char json_buf[JSON_BUF_LEN];
  Frame frame;

  while(!exit_flag) {
    recv_json_len = zmq_recv(sock_pull, json_buf, JSON_BUF_LEN - 1, ZMQ_NOBLOCK);  // leave room for the terminating '\0'
    
    if (recv_json_len > 0) {
      frame = frame_pool->alloc_frame();
      json_buf[recv_json_len] = '\0';
      json_to_frame(json_buf, frame);

#ifdef DEBUG
      std::cout << "Worker | Recv From Ventilator | SEQ : " << frame.seq_buf 
        << " LEN : " << frame.msg_len << std::endl;
#endif
      unprocessed_frame_queue.push_back(frame);
    }
  }
  return nullptr;
}

void *send_in_thread(void *ptr)
{
  int send_json_len;
  unsigned char json_buf[JSON_BUF_LEN];
  Frame frame;

  while(!exit_flag) {
    if (processed_frame_queue.size() > 0) {
      frame = processed_frame_queue.front();
      processed_frame_queue.pop_front();

#ifdef DEBUG
      std::cout << "Worker | Send To Sink | SEQ : " << frame.seq_buf
        << " LEN : " << frame.msg_len << std::endl;
#endif
      send_json_len = frame_to_json(json_buf, frame);
      zmq_send(sock_push, json_buf, send_json_len, 0);

      frame_pool->free_frame(frame);
    }
  }
  return nullptr;
}

int main(int argc, char *argv[])
{
  if (argc < 3) {
    fprintf(stderr, "usage: %s <cfg> <weights> [-gpu GPU_ID] [-thresh THRESH]\n", argv[0]);
    return 1;
  }

  const char *cfg_path = argv[1];
  const char *weights_path = argv[2];
  int gpu_id = find_int_arg(argc, argv, "-gpu", 0);
  float thresh = find_float_arg(argc, argv, "-thresh", 0.2);
  fprintf(stdout, "cfg : %s, weights : %s, gpu-id : %d, thresh : %f\n", 
      cfg_path, weights_path, gpu_id, thresh);

  // opencv
  std::vector<int> param = {cv::IMWRITE_JPEG_QUALITY, 60 };

  // install the signal handler so Ctrl-C sets exit_flag
  signal(SIGINT, sig_handler);

  // ZMQ
  int ret;

  void *context = zmq_ctx_new();

  sock_pull = zmq_socket(context, ZMQ_PULL);
  ret = zmq_connect(sock_pull, "ipc://unprocessed");
  assert(ret != -1);

  sock_push = zmq_socket(context, ZMQ_PUSH);
  ret = zmq_connect(sock_push, "ipc://processed");
  assert(ret != -1);

  // frame_pool
  frame_pool = new Frame_pool(5000);

  // Thread
  pthread_t recv_thread;
  if (pthread_create(&recv_thread, 0, recv_in_thread, 0))
    std::cerr << "Thread creation failed (recv_thread)" << std::endl;

  pthread_t send_thread;
  if (pthread_create(&send_thread, 0, send_in_thread, 0))
    std::cerr << "Thread creation failed (send_thread)" << std::endl;

  pthread_detach(send_thread);
  pthread_detach(recv_thread);

  // darknet openpose detector
  PoseDetector detector(cfg_path, weights_path, gpu_id);
  People* det_people = nullptr;
  std::stringstream ss;

  // frame
  Frame frame;
  int frame_len;
  unsigned char *frame_buf_ptr;

  // time
  auto time_begin = std::chrono::steady_clock::now();
  auto time_end = std::chrono::steady_clock::now();
  double det_time;

  while(!exit_flag) {
    // recv from ventilator
    if (unprocessed_frame_queue.size() > 0) {
      frame = unprocessed_frame_queue.front();
      unprocessed_frame_queue.pop_front();

      frame_len = frame.msg_len;
      frame_buf_ptr = frame.msg_buf;

      // unsigned char array -> vector
      std::vector<unsigned char> raw_vec(frame_buf_ptr, frame_buf_ptr + frame_len);

      // vector -> mat
      cv::Mat raw_mat = cv::imdecode(cv::Mat(raw_vec), 1);
      
      // detect pose
      time_begin = std::chrono::steady_clock::now();
      detector.detect(raw_mat, thresh);
      det_people = detector.get_people();
      time_end = std::chrono::steady_clock::now();
      det_time = std::chrono::duration <double, std::milli> (time_end - time_begin).count();
#ifdef DEBUG
      std::cout << "Darknet | Detect | SEQ : " << frame.seq_buf << " Time : " << det_time << "ms" << std::endl;
#endif 
      // det_people & serialize
      ss.str(""); // ss clear
      {
        boost::archive::text_oarchive oa(ss);
        oa << *det_people;
      }
      std::string ser_people = ss.str();

      frame.det_len = ser_people.size();
      memcpy(frame.det_buf, ser_people.c_str(), frame.det_len);
      frame.det_buf[frame.det_len] = '\0';

      // mat -> vector
      std::vector<unsigned char> res_vec;
      cv::imencode(".jpg", raw_mat, res_vec, param);

      // vector -> frame array
      frame.msg_len = res_vec.size();
      std::copy(res_vec.begin(), res_vec.end(), frame.msg_buf);

      // push to processed frame_queue
      processed_frame_queue.push_back(frame);
    }
  }

  delete frame_pool;
  zmq_close(sock_pull);
  zmq_close(sock_push);
  zmq_ctx_destroy(context);

  return 0;
}

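The worker's per-frame flow is: decode the JPEG payload, run pose detection, serialize the detections into `det_buf` alongside the re-encoded image in `msg_buf`. A minimal Python sketch of that flow, with `pickle` standing in for Boost's `text_oarchive` and a lambda standing in for `PoseDetector::detect` (both are stand-ins, not the project's actual formats):

```python
import pickle

def process_frame(jpeg_bytes, detect):
    # worker.cpp flow: decode -> detect -> serialize detections next to the image
    people = detect(jpeg_bytes)         # stand-in for PoseDetector::detect
    det_buf = pickle.dumps(people)      # stand-in for boost::archive::text_oarchive
    return {"msg_buf": jpeg_bytes, "det_buf": det_buf}

# hypothetical detector returning one person with no keypoints
frame = process_frame(b"\xff\xd8fake-jpeg", lambda img: [{"id": 1, "keypoints": []}])
restored = pickle.loads(frame["det_buf"])
```

Carrying the serialized detections and the image in the same frame is what lets the sink re-associate them by sequence number without a second channel.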

================================================
FILE: server/src/yolo_v2_class.hpp
================================================
#ifndef YOLO_V2_CLASS_HPP
#define YOLO_V2_CLASS_HPP

#ifndef LIB_API
#ifdef LIB_EXPORTS
#if defined(_MSC_VER)
#define LIB_API __declspec(dllexport)
#else
#define LIB_API __attribute__((visibility("default")))
#endif
#else
#if defined(_MSC_VER)
#define LIB_API
#else
#define LIB_API
#endif
#endif
#endif

#define C_SHARP_MAX_OBJECTS 1000

struct bbox_t {
    unsigned int x, y, w, h;       // (x,y) - top-left corner, (w, h) - width & height of bounding box
    float prob;                    // confidence - probability that the object was found correctly
    unsigned int obj_id;           // class of object - from range [0, classes-1]
    unsigned int track_id;         // tracking id for video (0 - untracked, 1 - inf - tracked object)
    unsigned int frames_counter;   // counter of frames on which the object was detected
    float x_3d, y_3d, z_3d;        // center of object (in Meters) if ZED 3D Camera is used
};

struct image_t {
    int h;                        // height
    int w;                        // width
    int c;                        // number of channels (3 - for RGB)
    float *data;                  // pointer to the image data
};

struct bbox_t_container {
    bbox_t candidates[C_SHARP_MAX_OBJECTS];
};

#ifdef __cplusplus
#include <memory>
#include <vector>
#include <deque>
#include <algorithm>
#include <chrono>
#include <string>
#include <sstream>
#include <iostream>
#include <cmath>

#ifdef OPENCV
#include <opencv2/opencv.hpp>            // C++
#include <opencv2/highgui/highgui_c.h>   // C
#include <opencv2/imgproc/imgproc_c.h>   // C
#endif

extern "C" LIB_API int init(const char *configurationFilename, const char *weightsFilename, int gpu);
extern "C" LIB_API int detect_image(const char *filename, bbox_t_container &container);
extern "C" LIB_API int detect_mat(const uint8_t* data, const size_t data_length, bbox_t_container &container);
extern "C" LIB_API int dispose();
extern "C" LIB_API int get_device_count();
extern "C" LIB_API int get_device_name(int gpu, char* deviceName);
extern "C" LIB_API bool built_with_cuda();
extern "C" LIB_API bool built_with_cudnn();
extern "C" LIB_API bool built_with_opencv();
extern "C" LIB_API void send_json_custom(char const* send_buf, int port, int timeout);

class Detector {
    std::shared_ptr<void> detector_gpu_ptr;
    std::deque<std::vector<bbox_t>> prev_bbox_vec_deque;
public:
    const int cur_gpu_id;
    float nms = .4;
    bool wait_stream;

    LIB_API Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
    LIB_API ~Detector();

    LIB_API std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
    LIB_API std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
    static LIB_API image_t load_image(std::string image_filename);
    static LIB_API void free_image(image_t m);
    LIB_API int get_net_width() const;
    LIB_API int get_net_height() const;
    LIB_API int get_net_out_width() const;
    LIB_API int get_net_out_height() const;
    LIB_API int get_net_color_depth() const;
    LIB_API float *predict(float *input) const;

    LIB_API std::vector<bbox_t> tracking_id(std::vector<bbox_t> cur_bbox_vec, bool const change_history = true,
                                                int const frames_story = 5, int const max_dist = 40);

    LIB_API void *get_cuda_context();

    //LIB_API bool send_json_http(std::vector<bbox_t> cur_bbox_vec, std::vector<std::string> obj_names, int frame_id,
    //    std::string filename = std::string(), int timeout = 400000, int port = 8070);

    std::vector<bbox_t> detect_resized(image_t img, int init_w, int init_h, float thresh = 0.2, bool use_mean = false)
    {
        if (img.data == NULL)
            throw std::runtime_error("Image is empty");
        auto detection_boxes = detect(img, thresh, use_mean);
        float wk = (float)init_w / img.w, hk = (float)init_h / img.h;
        for (auto &i : detection_boxes) i.x *= wk, i.w *= wk, i.y *= hk, i.h *= hk;
        return detection_boxes;
    }

#ifdef OPENCV
    std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false)
    {
        if(mat.data == NULL)
            throw std::runtime_error("Image is empty");
        auto image_ptr = mat_to_image_resize(mat);
        return detect_resized(*image_ptr, mat.cols, mat.rows, thresh, use_mean);
    }

    std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const
    {
        if (mat.data == NULL) return std::shared_ptr<image_t>(NULL);

        cv::Size network_size = cv::Size(get_net_width(), get_net_height());
        cv::Mat det_mat;
        if (mat.size() != network_size)
            cv::resize(mat, det_mat, network_size);
        else
            det_mat = mat;  // only reference is copied

        return mat_to_image(det_mat);
    }

    static std::shared_ptr<image_t> mat_to_image(cv::Mat img_src)
    {
        cv::Mat img;
        if (img_src.channels() == 4) cv::cvtColor(img_src, img, cv::COLOR_RGBA2BGR);
        else if (img_src.channels() == 3) cv::cvtColor(img_src, img, cv::COLOR_RGB2BGR);
        else if (img_src.channels() == 1) cv::cvtColor(img_src, img, cv::COLOR_GRAY2BGR);
        else std::cerr << " Warning: img_src.channels() is not 1, 3 or 4. It is = " << img_src.channels() << std::endl;
        std::shared_ptr<image_t> image_ptr(new image_t, [](image_t *img) { free_image(*img); delete img; });
        *image_ptr = mat_to_image_custom(img);
        return image_ptr;
    }

private:

    static image_t mat_to_image_custom(cv::Mat mat)
    {
        int w = mat.cols;
        int h = mat.rows;
        int c = mat.channels();
        image_t im = make_image_custom(w, h, c);
        unsigned char *data = (unsigned char *)mat.data;
        int step = mat.step;
        for (int y = 0; y < h; ++y) {
            for (int k = 0; k < c; ++k) {
                for (int x = 0; x < w; ++x) {
                    im.data[k*w*h + y*w + x] = data[y*step + x*c + k] / 255.0f;
                }
            }
        }
        return im;
    }

    static image_t make_empty_image(int w, int h, int c)
    {
        image_t out;
        out.data = 0;
        out.h = h;
        out.w = w;
        out.c = c;
        return out;
    }

    static image_t make_image_custom(int w, int h, int c)
    {
        image_t out = make_empty_image(w, h, c);
        out.data = (float *)calloc(h*w*c, sizeof(float));
        return out;
    }

#endif    // OPENCV

public:

    bool send_json_http(std::vector<bbox_t> cur_bbox_vec, std::vector<std::string> obj_names, int frame_id,
        std::string filename = std::string(), int timeout = 400000, int port = 8070)
    {
        std::string send_str;

        char *tmp_buf = (char *)calloc(1024, sizeof(char));
        if (!filename.empty()) {
            sprintf(tmp_buf, "{\n \"frame_id\":%d, \n \"filename\":\"%s\", \n \"objects\": [ \n", frame_id, filename.c_str());
        }
        else {
            sprintf(tmp_buf, "{\n \"frame_id\":%d, \n \"objects\": [ \n", frame_id);
        }
        send_str = tmp_buf;
        free(tmp_buf);

        for (auto & i : cur_bbox_vec) {
            char *buf = (char *)calloc(2048, sizeof(char));

            sprintf(buf, "  {\"class_id\":%d, \"name\":\"%s\", \"absolute_coordinates\":{\"center_x\":%d, \"center_y\":%d, \"width\":%d, \"height\":%d}, \"confidence\":%f",
                i.obj_id, obj_names[i.obj_id].c_str(), i.x, i.y, i.w, i.h, i.prob);

            //sprintf(buf, "  {\"class_id\":%d, \"name\":\"%s\", \"relative_coordinates\":{\"center_x\":%f, \"center_y\":%f, \"width\":%f, \"height\":%f}, \"confidence\":%f",
            //    i.obj_id, obj_names[i.obj_id], i.x, i.y, i.w, i.h, i.prob);

            send_str += buf;

            if (!std::isnan(i.z_3d)) {
                sprintf(buf, "\n    , \"coordinates_in_meters\":{\"x_3d\":%.2f, \"y_3d\":%.2f, \"z_3d\":%.2f}",
                    i.x_3d, i.y_3d, i.z_3d);
                send_str += buf;
            }

            send_str += "}\n";

            free(buf);
        }

        //send_str +=  "\n ] \n}, \n";
        send_str += "\n ] \n}";

        send_json_custom(send_str.c_str(), port, timeout);
        return true;
    }
};
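`detect_resized` maps boxes from the network input resolution back to the original image by scaling each coordinate with the per-axis ratios `wk` and `hk`. The same arithmetic in a short Python sketch (function name and tuple layout are illustrative):

```python
def rescale_boxes(boxes, net_w, net_h, init_w, init_h):
    # scale (x, y, w, h) from network input size back to the original image,
    # mirroring Detector::detect_resized
    wk, hk = init_w / net_w, init_h / net_h
    return [(x * wk, y * hk, w * wk, h * hk) for (x, y, w, h) in boxes]

# a 416x416 network box mapped onto an 832x416 source image
out = rescale_boxes([(104, 104, 52, 52)], 416, 416, 832, 416)
```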
// --------------------------------------------------------------------------------


#if defined(TRACK_OPTFLOW) && defined(OPENCV) && defined(GPU)

#include <opencv2/cudaoptflow.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudaarithm.hpp>
#include <opencv2/core/cuda.hpp>

class Tracker_optflow {
public:
    const int gpu_count;
    const int gpu_id;
    const int flow_error;


    Tracker_optflow(int _gpu_id = 0, int win_size = 15, int max_level = 3, int iterations = 8000, int _flow_error = -1) :
        gpu_count(cv::cuda::getCudaEnabledDeviceCount()), gpu_id(std::min(_gpu_id, gpu_count-1)),
        flow_error((_flow_error > 0)? _flow_error:(win_size*4))
    {
        int const old_gpu_id = cv::cuda::getDevice();
        cv::cuda::setDevice(gpu_id);

        stream = cv::cuda::Stream();

        sync_PyrLKOpticalFlow_gpu = cv::cuda::SparsePyrLKOpticalFlow::create();
        sync_PyrLKOpticalFlow_gpu->setWinSize(cv::Size(win_size, win_size));    // 9, 15, 21, 31
        sync_PyrLKOpticalFlow_gpu->setMaxLevel(max_level);        // +- 3 pt
        sync_PyrLKOpticalFlow_gpu->setNumIters(iterations);    // 2000, def: 30

        cv::cuda::setDevice(old_gpu_id);
    }

    // just to avoid extra allocations
    cv::cuda::GpuMat src_mat_gpu;
    cv::cuda::GpuMat dst_mat_gpu, dst_grey_gpu;
    cv::cuda::GpuMat prev_pts_flow_gpu, cur_pts_flow_gpu;
    cv::cuda::GpuMat status_gpu, err_gpu;

    cv::cuda::GpuMat src_grey_gpu;    // used in both functions
    cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> sync_PyrLKOpticalFlow_gpu;
    cv::cuda::Stream stream;

    std::vector<bbox_t> cur_bbox_vec;
    std::vector<bool> good_bbox_vec_flags;
    cv::Mat prev_pts_flow_cpu;

    void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    {
        cur_bbox_vec = _cur_bbox_vec;
        good_bbox_vec_flags = std::vector<bool>(cur_bbox_vec.size(), true);
        cv::Mat prev_pts, cur_pts_flow_cpu;

        for (auto &i : cur_bbox_vec) {
            float x_center = (i.x + i.w / 2.0F);
            float y_center = (i.y + i.h / 2.0F);
            prev_pts.push_back(cv::Point2f(x_center, y_center));
        }

        if (prev_pts.rows == 0)
            prev_pts_flow_cpu = cv::Mat();
        else
            cv::transpose(prev_pts, prev_pts_flow_cpu);

        if (prev_pts_flow_gpu.cols < prev_pts_flow_cpu.cols) {
            prev_pts_flow_gpu = cv::cuda::GpuMat(prev_pts_flow_cpu.size(), prev_pts_flow_cpu.type());
            cur_pts_flow_gpu = cv::cuda::GpuMat(prev_pts_flow_cpu.size(), prev_pts_flow_cpu.type());

            status_gpu = cv::cuda::GpuMat(prev_pts_flow_cpu.size(), CV_8UC1);
            err_gpu = cv::cuda::GpuMat(prev_pts_flow_cpu.size(), CV_32FC1);
        }

        prev_pts_flow_gpu.upload(cv::Mat(prev_pts_flow_cpu), stream);
    }


    void update_tracking_flow(cv::Mat src_mat, std::vector<bbox_t> _cur_bbox_vec)
    {
        int const old_gpu_id = cv::cuda::getDevice();
        if (old_gpu_id != gpu_id)
            cv::cuda::setDevice(gpu_id);

        if (src_mat.channels() == 1 || src_mat.channels() == 3 || src_mat.channels() == 4) {
            if (src_mat_gpu.cols == 0) {
                src_mat_gpu = cv::cuda::GpuMat(src_mat.size(), src_mat.type());
                src_grey_gpu = cv::cuda::GpuMat(src_mat.size(), CV_8UC1);
            }

            if (src_mat.channels() == 1) {
                src_mat_gpu.upload(src_mat, stream);
                src_mat_gpu.copyTo(src_grey_gpu);
            }
            else if (src_mat.channels() == 3) {
                src_mat_gpu.upload(src_mat, stream);
                cv::cuda::cvtColor(src_mat_gpu, src_grey_gpu, CV_BGR2GRAY, 1, stream);
            }
            else if (src_mat.channels() == 4) {
                src_mat_gpu.upload(src_mat, stream);
                cv::cuda::cvtColor(src_mat_gpu, src_grey_gpu, CV_BGRA2GRAY, 1, stream);
            }
            else {
                std::cerr << " Warning: src_mat.channels() is not: 1, 3 or 4. It is = " << src_mat.channels() << " \n";
                return;
            }

        }
        update_cur_bbox_vec(_cur_bbox_vec);

        if (old_gpu_id != gpu_id)
            cv::cuda::setDevice(old_gpu_id);
    }


    std::vector<bbox_t> tracking_flow(cv::Mat dst_mat, bool check_error = true)
    {
        if (sync_PyrLKOpticalFlow_gpu.empty()) {
            std::cout << "sync_PyrLKOpticalFlow_gpu isn't initialized \n";
            return cur_bbox_vec;
        }

        int const old_gpu_id = cv::cuda::getDevice();
        if(old_gpu_id != gpu_id)
            cv::cuda::setDevice(gpu_id);

        if (dst_mat_gpu.cols == 0) {
            dst_mat_gpu = cv::cuda::GpuMat(dst_mat.size(), dst_mat.type());
            dst_grey_gpu = cv::cuda::GpuMat(dst_mat.size(), CV_8UC1);
        }

        //dst_grey_gpu.upload(dst_mat, stream);    // use BGR
        dst_mat_gpu.upload(dst_mat, stream);
        cv::cuda::cvtColor(dst_mat_gpu, dst_grey_gpu, CV_BGR2GRAY, 1, stream);

        if (src_grey_gpu.rows != dst_grey_gpu.rows || src_grey_gpu.cols != dst_grey_gpu.cols) {
            stream.waitForCompletion();
            src_grey_gpu = dst_grey_gpu.clone();
            cv::cuda::setDevice(old_gpu_id);
            return cur_bbox_vec;
        }

        ////sync_PyrLKOpticalFlow_gpu.sparse(src_grey_gpu, dst_grey_gpu, prev_pts_flow_gpu, cur_pts_flow_gpu, status_gpu, &err_gpu);    // OpenCV 2.4.x
        sync_PyrLKOpticalFlow_gpu->calc(src_grey_gpu, dst_grey_gpu, prev_pts_flow_gpu, cur_pts_flow_gpu, status_gpu, err_gpu, stream);    // OpenCV 3.x

        cv::Mat cur_pts_flow_cpu;
        cur_pts_flow_gpu.download(cur_pts_flow_cpu, stream);

        dst_grey_gpu.copyTo(src_grey_gpu, stream);

        cv::Mat err_cpu, status_cpu;
        err_gpu.download(err_cpu, stream);
        status_gpu.download(status_cpu, stream);

        stream.waitForCompletion();

        std::vector<bbox_t> result_bbox_vec;

        if (err_cpu.cols == cur_bbox_vec.size() && status_cpu.cols == cur_bbox_vec.size())
        {
            for (size_t i = 0; i < cur_bbox_vec.size(); ++i)
            {
                cv::Point2f cur_key_pt = cur_pts_flow_cpu.at<cv::Point2f>(0, i);
                cv::Point2f prev_key_pt = prev_pts_flow_cpu.at<cv::Point2f>(0, i);

                float moved_x = cur_key_pt.x - prev_key_pt.x;
                float moved_y = cur_key_pt.y - prev_key_pt.y;

                if (abs(moved_x) < 100 && abs(moved_y) < 100 && good_bbox_vec_flags[i])
                    if (err_cpu.at<float>(0, i) < flow_error && status_cpu.at<unsigned char>(0, i) != 0 &&
                        ((float)cur_bbox_vec[i].x + moved_x) > 0 && ((float)cur_bbox_vec[i].y + moved_y) > 0)
                    {
                        cur_bbox_vec[i].x += moved_x + 0.5;
                        cur_bbox_vec[i].y += moved_y + 0.5;
                        result_bbox_vec.push_back(cur_bbox_vec[i]);
                    }
                    else good_bbox_vec_flags[i] = false;
                else good_bbox_vec_flags[i] = false;

                //if(!check_error && !good_bbox_vec_flags[i]) result_bbox_vec.push_back(cur_bbox_vec[i]);
            }
        }

        cur_pts_flow_gpu.swap(prev_pts_flow_gpu);
        cur_pts_flow_cpu.copyTo(prev_pts_flow_cpu);

        if (old_gpu_id != gpu_id)
            cv::cuda::setDevice(old_gpu_id);

        return result_bbox_vec;
    }

};

#elif defined(TRACK_OPTFLOW) && defined(OPENCV)

//#include <opencv2/optflow.hpp>
#include <opencv2/video/tracking.hpp>

class Tracker_optflow {
public:
    const int flow_error;


    Tracker_optflow(int win_size = 15, int max_level = 3, int iterations = 8000, int _flow_error = -1) :
        flow_error((_flow_error > 0)? _flow_error:(win_size*4))
    {
        sync_PyrLKOpticalFlow = cv::SparsePyrLKOpticalFlow::create();
        sync_PyrLKOpticalFlow->setWinSize(cv::Size(win_size, win_size));    // 9, 15, 21, 31
        sync_PyrLKOpticalFlow->setMaxLevel(max_level);        // +- 3 pt

    }

    // just to avoid extra allocations
    cv::Mat dst_grey;
    cv::Mat prev_pts_flow, cur_pts_flow;
    cv::Mat status, err;

    cv::Mat src_grey;    // used in both functions
    cv::Ptr<cv::SparsePyrLKOpticalFlow> sync_PyrLKOpticalFlow;

    std::vector<bbox_t> cur_bbox_vec;
    std::vector<bool> good_bbox_vec_flags;

    void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    {
        cur_bbox_vec = _cur_bbox_vec;
        good_bbox_vec_flags = std::vector<bool>(cur_bbox_vec.size(), true);
        cv::Mat prev_pts, cur_pts_flow;

        for (auto &i : cur_bbox_vec) {
            float x_center = (i.x + i.w / 2.0F);
            float y_center = (i.y + i.h / 2.0F);
            prev_pts.push_back(cv::Point2f(x_center, y_center));
        }

        if (prev_pts.rows == 0)
            prev_pts_flow = cv::Mat();
        else
            cv::transpose(prev_pts, prev_pts_flow);
    }


    void update_tracking_flow(cv::Mat new_src_mat, std::vector<bbox_t> _cur_bbox_vec)
    {
        if (new_src_mat.channels() == 1) {
            src_grey = new_src_mat.clone();
        }
        else if (new_src_mat.channels() == 3) {
            cv::cvtColor(new_src_mat, src_grey, CV_BGR2GRAY, 1);
        }
        else if (new_src_mat.channels() == 4) {
            cv::cvtColor(new_src_mat, src_grey, CV_BGRA2GRAY, 1);
        }
        else {
            std::cerr << " Warning: new_src_mat.channels() is not: 1, 3 or 4. It is = " << new_src_mat.channels() << " \n";
            return;
        }
        update_cur_bbox_vec(_cur_bbox_vec);
    }


    std::vector<bbox_t> tracking_flow(cv::Mat new_dst_mat, bool check_error = true)
    {
        if (sync_PyrLKOpticalFlow.empty()) {
            std::cout << "sync_PyrLKOpticalFlow isn't initialized \n";
            return cur_bbox_vec;
        }

        cv::cvtColor(new_dst_mat, dst_grey, CV_BGR2GRAY, 1);

        if (src_grey.rows != dst_grey.rows || src_grey.cols != dst_grey.cols) {
            src_grey = dst_grey.clone();
            //std::cerr << " Warning: src_grey.rows != dst_grey.rows || src_grey.cols != dst_grey.cols \n";
            return cur_bbox_vec;
        }

        if (prev_pts_flow.cols < 1) {
            return cur_bbox_vec;
        }

        ////sync_PyrLKOpticalFlow_gpu.sparse(src_grey_gpu, dst_grey_gpu, prev_pts_flow_gpu, cur_pts_flow_gpu, status_gpu, &err_gpu);    // OpenCV 2.4.x
        sync_PyrLKOpticalFlow->calc(src_grey, dst_grey, prev_pts_flow, cur_pts_flow, status, err);    // OpenCV 3.x

        dst_grey.copyTo(src_grey);

        std::vector<bbox_t> result_bbox_vec;

        if (err.rows == cur_bbox_vec.size() && status.rows == cur_bbox_vec.size())
        {
            for (size_t i = 0; i < cur_bbox_vec.size(); ++i)
            {
                cv::Point2f cur_key_pt = cur_pts_flow.at<cv::Point2f>(0, i);
                cv::Point2f prev_key_pt = prev_pts_flow.at<cv::Point2f>(0, i);

                float moved_x = cur_key_pt.x - prev_key_pt.x;
                float moved_y = cur_key_pt.y - prev_key_pt.y;

                if (abs(moved_x) < 100 && abs(moved_y) < 100 && good_bbox_vec_flags[i])
                    if (err.at<float>(0, i) < flow_error && status.at<unsigned char>(0, i) != 0 &&
                        ((float)cur_bbox_vec[i].x + moved_x) > 0 && ((float)cur_bbox_vec[i].y + moved_y) > 0)
                    {
                        cur_bbox_vec[i].x += moved_x + 0.5;
                        cur_bbox_vec[i].y += moved_y + 0.5;
                        result_bbox_vec.push_back(cur_bbox_vec[i]);
                    }
                    else good_bbox_vec_flags[i] = false;
                else good_bbox_vec_flags[i] = false;

                //if(!check_error && !good_bbox_vec_flags[i]) result_bbox_vec.push_back(cur_bbox_vec[i]);
            }
        }

        prev_pts_flow = cur_pts_flow.clone();

        return result_bbox_vec;
    }

};
#else

class Tracker_optflow {};

#endif    // defined(TRACK_OPTFLOW) && defined(OPENCV)


#ifdef OPENCV

static cv::Scalar obj_id_to_color(int obj_id) {
    int const colors[6][3] = { { 1,0,1 },{ 0,0,1 },{ 0,1,1 },{ 0,1,0 },{ 1,1,0 },{ 1,0,0 } };
    int const offset = obj_id * 123457 % 6;
    int const color_scale = 150 + (obj_id * 123457) % 100;
    cv::Scalar color(colors[offset][0], colors[offset][1], colors[offset][2]);
    color *= color_scale;
    return color;
}
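`obj_id_to_color` hashes the class id into one of six base colors and a per-id intensity, so the same class always gets the same color. The mapping transliterated to Python:

```python
def obj_id_to_color(obj_id):
    # same deterministic hash as the C++ helper: pick a base color and a
    # brightness from the class id
    colors = [(1, 0, 1), (0, 0, 1), (0, 1, 1), (0, 1, 0), (1, 1, 0), (1, 0, 0)]
    offset = obj_id * 123457 % 6
    scale = 150 + (obj_id * 123457) % 100
    return tuple(c * scale for c in colors[offset])

c0 = obj_id_to_color(0)   # magenta-ish at minimum brightness
c1 = obj_id_to_color(1)
```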

class preview_boxes_t {
    enum { frames_history = 30 };    // how long to keep the history saved

    struct preview_box_track_t {
        unsigned int track_id, obj_id, last_showed_frames_ago;
        bool current_detection;
        bbox_t bbox;
        cv::Mat mat_obj, mat_resized_obj;
        preview_box_track_t() : track_id(0), obj_id(0), last_showed_frames_ago(frames_history), current_detection(false) {}
    };
    std::vector<preview_box_track_t> preview_box_track_id;
    size_t const preview_box_size, bottom_offset;
    bool const one_off_detections;
public:
    preview_boxes_t(size_t _preview_box_size = 100, size_t _bottom_offset = 100, bool _one_off_detections = false) :
        preview_box_size(_preview_box_size), bottom_offset(_bottom_offset), one_off_detections(_one_off_detections)
    {}

    void set(cv::Mat src_mat, std::vector<bbox_t> result_vec)
    {
        size_t const count_preview_boxes = src_mat.cols / preview_box_size;
        if (preview_box_track_id.size() != count_preview_boxes) preview_box_track_id.resize(count_preview_boxes);

        // increment frames history
        for (auto &i : preview_box_track_id)
            i.last_showed_frames_ago = std::min((unsigned)frames_history, i.last_showed_frames_ago + 1);

        // occupy empty boxes
        for (auto &k : result_vec) {
            bool found = false;
            // find the same (track_id)
            for (auto &i : preview_box_track_id) {
                if (i.track_id == k.track_id) {
                    if (!one_off_detections) i.last_showed_frames_ago = 0; // for tracked objects
                    found = true;
                    break;
                }
            }
            if (!found) {
                // find empty box
                for (auto &i : preview_box_track_id) {
                    if (i.last_showed_frames_ago == frames_history) {
                        if (!one_off_detections && k.frames_counter == 0) break; // don't show if obj isn't tracked yet
                        i.track_id = k.track_id;
                        i.obj_id = k.obj_id;
                        i.bbox = k;
                        i.last_showed_frames_ago = 0;
                        break;
                    }
                }
            }
        }

        // draw preview box (from old or current frame)
        for (size_t i = 0; i < preview_box_track_id.size(); ++i)
        {
            // get object image
            cv::Mat dst = preview_box_track_id[i].mat_resized_obj;
            preview_box_track_id[i].current_detection = false;

            for (auto &k : result_vec) {
                if (preview_box_track_id[i].track_id == k.track_id) {
                    if (one_off_detections && preview_box_track_id[i].last_showed_frames_ago > 0) {
                        preview_box_track_id[i].last_showed_frames_ago = frames_history; break;
                    }
                    bbox_t b = k;
                    cv::Rect r(b.x, b.y, b.w, b.h);
                    cv::Rect img_rect(cv::Point2i(0, 0), src_mat.size());
                    cv::Rect rect_roi = r & img_rect;
                    if (rect_roi.width > 1 || rect_roi.height > 1) {
                        cv::Mat roi = src_mat(rect_roi);
                        cv::resize(roi, dst, cv::Size(preview_box_size, preview_box_size), 0, 0, cv::INTER_NEAREST);
                        preview_box_track_id[i].mat_obj = roi.clone();
                        preview_box_track_id[i].mat_resized_obj = dst.clone();
                        preview_box_track_id[i].current_detection = true;
                        preview_box_track_id[i].bbox = k;
                    }
                    break;
                }
            }
        }
    }


    void draw(cv::Mat draw_mat, bool show_small_boxes = false)
    {
        // draw preview box (from old or current frame)
        for (size_t i = 0; i < preview_box_track_id.size(); ++i)
        {
            auto &prev_box = preview_box_track_id[i];

            // draw object image
            cv::Mat dst = prev_box.mat_resized_obj;
            if (prev_box.last_showed_frames_ago < frames_history &&
                dst.size() == cv::Size(preview_box_size, preview_box_size))
            {
                cv::Rect dst_rect_roi(cv::Point2i(i * preview_box_size, draw_mat.rows - bottom_offset), dst.size());
                cv::Mat dst_roi = draw_mat(dst_rect_roi);
                dst.copyTo(dst_roi);

                cv::Scalar color = obj_id_to_color(prev_box.obj_id);
                int thickness = (prev_box.current_detection) ? 5 : 1;
                cv::rectangle(draw_mat, dst_rect_roi, color, thickness);

                unsigned int const track_id = prev_box.track_id;
                std::string track_id_str = (track_id > 0) ? std::to_string(track_id) : "";
                putText(draw_mat, track_id_str, dst_rect_roi.tl() - cv::Point2i(-4, 5), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.9, cv::Scalar(0, 0, 0), 2);

                std::string size_str = std::to_string(prev_box.bbox.w) + "x" + std::to_string(prev_box.bbox.h);
                putText(draw_mat, size_str, dst_rect_roi.tl() + cv::Point2i(0, 12), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, cv::Scalar(0, 0, 0), 1);

                if (!one_off_detections && prev_box.current_detection) {
                    cv::line(draw_mat, dst_rect_roi.tl() + cv::Point2i(preview_box_size, 0),
                        cv::Point2i(prev_box.bbox.x, prev_box.bbox.y + prev_box.bbox.h),
                        color);
                }

                if (one_off_detections && show_small_boxes) {
                    cv::Rect src_rect_roi(cv::Point2i(prev_box.bbox.x, prev_box.bbox.y),
                        cv::Size(prev_box.bbox.w, prev_box.bbox.h));
                    unsigned int const color_history = (255 * prev_box.last_showed_frames_ago) / frames_history;
                    color = cv::Scalar(255 - 3 * color_history, 255 - 2 * color_history, 255 - 1 * color_history);
                    if (prev_box.mat_obj.size() == src_rect_roi.size()) {
                        prev_box.mat_obj.copyTo(draw_mat(src_rect_roi));
                    }
                    cv::rectangle(draw_mat, src_rect_roi, color, thickness);
                    putText(draw_mat, track_id_str, src_rect_roi.tl() - cv::Point2i(0, 10), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, cv::Scalar(0, 0, 0), 1);
                }
            }
        }
    }
};


class track_kalman_t
{
    int track_id_counter;
    std::chrono::steady_clock::time_point global_last_time;
    float dT;

public:
    int max_objects;    // max objects for tracking
    int min_frames;     // min frames to consider an object as detected
    const float max_dist;   // max distance (in px) to track with the same ID
    cv::Size img_size;  // max value of x,y,w,h

    struct tst_t {
        int track_id;
        int state_id;
        std::chrono::steady_clock::time_point last_time;
        int detection_count;
        tst_t() : track_id(-1), state_id(-1), detection_count(0) {}
    };
    std::vector<tst_t> track_id_state_id_time;
    std::vector<bbox_t> result_vec_pred;

    struct one_kalman_t;
    std::vector<one_kalman_t> kalman_vec;

    struct one_kalman_t
    {
        cv::KalmanFilter kf;
        cv::Mat state;
        cv::Mat meas;
        int measSize, stateSize, contrSize;

        void set_delta_time(float dT) {
            kf.transitionMatrix.at<float>(2) = dT;
            kf.transitionMatrix.at<float>(9) = dT;
        }

        void set(bbox_t box)
        {
            initialize_kalman();

            kf.errorCovPre.at<float>(0) = 1; // px
            kf.errorCovPre.at<float>(7) = 1; // px
            kf.errorCovPre.at<float>(14) = 1;
            kf.errorCovPre.at<float>(21) = 1;
            kf.errorCovPre.at<float>(28) = 1; // px
            kf.errorCovPre.at<float>(35) = 1; // px

            state.at<float>(0) = box.x;
            state.at<float>(1) = box.y;
            state.at<float>(2) = 0;
            state.at<float>(3) = 0;
            state.at<float>(4) = box.w;
            state.at<float>(5) = box.h;
            // <<<< Initialization

            kf.statePost = state;
        }

        // Kalman.correct() calculates: statePost = statePre + gain * (z(k)-measurementMatrix*statePre);
        // corrected state (x(k)): x(k)=x'(k)+K(k)*(z(k)-H*x'(k))
        void correct(bbox_t box) {
            meas.at<float>(0) = box.x;
            meas.at<float>(1) = box.y;
            meas.at<float>(2) = box.w;
            meas.at<float>(3) = box.h;

            kf.correct(meas);

            bbox_t new_box = predict();
            if (new_box.w == 0 || new_box.h == 0) {
                set(box);
                //std::cerr << " force set(): track_id = " << box.track_id <<
                //    ", x = " << box.x << ", y = " << box.y << ", w = " << box.w << ", h = " << box.h << std::endl;
            }
        }

        // Kalman.predict() calculates: statePre = TransitionMatrix * statePost;
        // predicted state (x'(k)): x(k)=A*x(k-1)+B*u(k)
        bbox_t predict() {
            bbox_t box;
            state = kf.predict();

            box.x = state.at<float>(0);
            box.y = state.at<float>(1);
            box.w = state.at<float>(4);
            box.h = state.at<float>(5);
            return box;
        }

        void initialize_kalman()
        {
            kf = cv::KalmanFilter(stateSize, measSize, contrSize, CV_32F);

            // Transition State Matrix A
            // Note: set dT at each processing step!
            // [ 1 0 dT 0  0 0 ]
            // [ 0 1 0  dT 0 0 ]
            // [ 0 0 1  0  0 0 ]
            // [ 0 0 0  1  0 0 ]
            // [ 0 0 0  0  1 0 ]
            // [ 0 0 0  0  0 1 ]
            cv::setIdentity(kf.transitionMatrix);

            // Measurement Matrix H
            // [ 1 0 0 0 0 0 ]
            // [ 0 1 0 0 0 0 ]
            // [ 0 0 0 0 1 0 ]
            // [ 0 0 0 0 0 1 ]
            kf.measurementMatrix = cv::Mat::zeros(measSize, stateSize, CV_32F);
            kf.measurementMatrix.at<float>(0) = 1.0f;
            kf.measurementMatrix.at<float>(7) = 1.0f;
            kf.measurementMatrix.at<float>(16) = 1.0f;
            kf.measurementMatrix.at<float>(23) = 1.0f;

            // Process Noise Covariance Matrix Q - result smoother with lower values (1e-2)
            // [ Ex   0   0     0     0    0  ]
            // [ 0    Ey  0     0     0    0  ]
            // [ 0    0   Ev_x  0     0    0  ]
            // [ 0    0   0     Ev_y  0    0  ]
            // [ 0    0   0     0     Ew   0  ]
            // [ 0    0   0     0     0    Eh ]
            //cv::setIdentity(kf.processNoiseCov, cv::Scalar(1e-3));
            kf.processNoiseCov.at<float>(0) = 1e-2;
            kf.processNoiseCov.at<float>(7) = 1e-2;
            kf.processNoiseCov.at<float>(14) = 1e-2;// 5.0f;
            kf.processNoiseCov.at<float>(21) = 1e-2;// 5.0f;
            kf.processNoiseCov.at<float>(28) = 5e-3;
            kf.processNoiseCov.at<float>(35) = 5e-3;

            // Measurement Noise Covariance Matrix R - result smoother with higher values (1e-1)
            cv::setIdentity(kf.measurementNoiseCov, cv::Scalar(1e-1));

            //cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1e-2));
            // <<<< Kalman Filter

            set_delta_time(0);
        }


        one_kalman_t(int _stateSize = 6, int _measSize = 4, int _contrSize = 0) :
            kf(_stateSize, _measSize, _contrSize, CV_32F), measSize(_measSize), stateSize(_stateSize), contrSize(_contrSize)
        {
            state = cv::Mat(stateSize, 1, CV_32F);  // [x,y,v_x,v_y,w,h]
            meas = cv::Mat(measSize, 1, CV_32F);    // [z_x,z_y,z_w,z_h]
            //cv::Mat procNoise(stateSize, 1, type)
            // [E_x,E_y,E_v_x,E_v_y,E_w,E_h]

            initialize_kalman();
        }
    };
    // ------------------------------------------



    track_kalman_t(int _max_objects = 1000, int _min_frames = 3, float _max_dist = 40, cv::Size _img_size = cv::Size(10000, 10000)) :
        max_objects(_max_objects), min_frames(_min_frames), max_dist(_max_dist), img_size(_img_size),
        track_id_counter(0)
    {
        kalman_vec.resize(max_objects);
        track_id_state_id_time.resize(max_objects);
        result_vec_pred.resize(max_objects);
    }

    float calc_dt() {
        dT = std::chrono::duration<double>(std::chrono::steady_clock::now() - global_last_time).count();
        return dT;
    }

    static float get_distance(float src_x, float src_y, float dst_x, float dst_y) {
        return sqrtf((src_x - dst_x)*(src_x - dst_x) + (src_y - dst_y)*(src_y - dst_y));
    }

    void clear_old_states() {
        // clear old bboxes
        for (size_t state_id = 0; state_id < track_id_state_id_time.size(); ++state_id)
        {
            float time_sec = std::chrono::duration<double>(std::chrono::steady_clock::now() - track_id_state_id_time[state_id].last_time).count();
            float time_wait = 0.5;    // 0.5 second
            if (track_id_state_id_time[state_id].track_id > -1)
            {
                if ((result_vec_pred[state_id].x > img_size.width) ||
                    (result_vec_pred[state_id].y > img_size.height))
                {
                    track_id_state_id_time[state_id].track_id = -1;
                }

                if (time_sec >= time_wait || track_id_state_id_time[state_id].detection_count < 0) {
                    //std::cerr << " remove track_id = " << track_id_state_id_time[state_id].track_id << ", state_id = " << state_id << std::endl;
                    track_id_state_id_time[state_id].track_id = -1; // remove bbox
                }
            }
        }
    }

    tst_t get_state_id(bbox_t find_box, std::vector<bool> &busy_vec)
    {
        tst_t tst;
        tst.state_id = -1;

        float min_dist = std::numeric_limits<float>::max();

        for (size_t i = 0; i < (size_t)max_objects; ++i)
        {
            if (track_id_state_id_time[i].track_id > -1 && result_vec_pred[i].obj_id == find_box.obj_id && busy_vec[i] == false)
            {
                bbox_t pred_box = result_vec_pred[i];

                float dist = get_distance(pred_box.x, pred_box.y, find_box.x, find_box.y);

                float movement_dist = std::max(max_dist, static_cast<float>(std::max(pred_box.w, pred_box.h)) );

                if ((dist < movement_dist) && (dist < min_dist)) {
                    min_dist = dist;
                    tst.state_id = i;
                }
            }
        }

        if (tst.state_id > -1) {
            int const state_id = tst.state_id;
            track_id_state_id_time[state_id].last_time = std::chrono::steady_clock::now();
            track_id_state_id_time[state_id].detection_count = std::max(track_id_state_id_time[state_id].detection_count + 2, 10);
            tst = track_id_state_id_time[state_id];
            tst.state_id = state_id;   // restore the index: the copy above may overwrite it
            busy_vec[state_id] = true; // mark this slot as matched for this frame
        }
        else {
            //std::cerr << " Didn't find: obj_id = " << find_bo
SYMBOL INDEX (152 symbols across 30 files)

FILE: client/darknet_client/src/args.cpp
  function del_arg (line 8) | void del_arg(int argc, char **argv, int index)
  function find_arg (line 15) | int find_arg(int argc, char* argv[], const char *arg)
  function find_int_arg (line 28) | int find_int_arg(int argc, char **argv, const char *arg, int def)
  function find_float_arg (line 43) | float find_float_arg(int argc, char **argv, const char *arg, float def)

FILE: client/darknet_client/src/base64.cpp
  function is_base64 (line 31) | static inline bool is_base64(unsigned char c) {
  function base64_encode (line 35) | std::string base64_encode(unsigned char const* bytes_to_encode, unsigned...
  function base64_decode (line 77) | std::string base64_decode(std::string const& encoded_string) {

FILE: client/darknet_client/src/frame.cpp
  function Frame (line 26) | Frame Frame_pool::alloc_frame(void) {
  function frame_to_json (line 45) | int frame_to_json(void* buf, const Frame& frame) {
  function json_to_frame (line 57) | void json_to_frame(void* buf, Frame& frame) {

FILE: client/darknet_client/src/frame.hpp
  type Frame (line 5) | struct Frame {
  class Frame_pool (line 18) | class Frame_pool

FILE: client/darknet_client/src/main.cpp
  class ComparePair (line 46) | class ComparePair
  function sig_handler (line 88) | void sig_handler(int s)
  function main (line 93) | int main(int argc, char *argv[])
  function fetch_thread (line 236) | void fetch_thread(void) {
  function capture_thread (line 272) | void capture_thread(void) {
  function recv_thread (line 327) | void recv_thread(void) {
  function input_show_thread (line 358) | void input_show_thread(void) {
  function output_show_thread (line 395) | void output_show_thread(void) {

FILE: client/darknet_client/src/mem_pool.cpp
  type _Unit (line 19) | struct _Unit
  type _Unit (line 28) | struct _Unit
  type _Unit (line 28) | struct _Unit
  type _Unit (line 28) | struct _Unit
  type _Unit (line 77) | struct _Unit
  type _Unit (line 92) | struct _Unit
  type _Unit (line 112) | struct _Unit
  type _Unit (line 112) | struct _Unit
  type _Unit (line 112) | struct _Unit

FILE: client/darknet_client/src/mem_pool.hpp
  class CMemPool (line 4) | class CMemPool
    type _Unit (line 8) | struct _Unit                     //The type of the node of linkedlist.
      type _Unit (line 10) | struct _Unit
    type _Unit (line 16) | struct _Unit
      type _Unit (line 10) | struct _Unit
    type _Unit (line 17) | struct _Unit
      type _Unit (line 10) | struct _Unit

FILE: client/darknet_client/src/share_queue.hpp
  class SharedQueue (line 10) | class SharedQueue
  function T (line 38) | T& SharedQueue<T>::front()

FILE: client/darknet_client/src/util.cpp
  function str_to_int (line 4) | int str_to_int(const char* str, int len)

FILE: server/action.py
  function load_X (line 34) | def load_X(msg):

FILE: server/action_train.py
  function load_X (line 21) | def load_X(X_path):
  function load_y (line 34) | def load_y(y_path):

FILE: server/src/DetectorInterface.hpp
  class DetectorInterface (line 6) | class DetectorInterface {

FILE: server/src/Hungarian.h
  function class (line 21) | class HungarianAlgorithm

FILE: server/src/KalmanTracker.cpp
  function StateType (line 42) | StateType KalmanTracker::predict()
  function StateType (line 79) | StateType KalmanTracker::get_state()
  function StateType (line 87) | StateType KalmanTracker::get_rect_xysr(float cx, float cy, float s, floa...

FILE: server/src/KalmanTracker.h
  function class (line 18) | class KalmanTracker

FILE: server/src/Tracker.hpp
  type TrackingBox (line 6) | struct TrackingBox
  class Tracker (line 14) | class Tracker

FILE: server/src/args.cpp
  function del_arg (line 8) | void del_arg(int argc, char **argv, int index)
  function find_arg (line 15) | int find_arg(int argc, char* argv[], const char *arg)
  function find_int_arg (line 28) | int find_int_arg(int argc, char **argv, const char *arg, int def)
  function find_float_arg (line 43) | float find_float_arg(int argc, char **argv, const char *arg, float def)

FILE: server/src/base64.cpp
  function is_base64 (line 41) | static inline bool is_base64(unsigned char c) {
  function base64_encode (line 45) | std::string base64_encode(unsigned char const* bytes_to_encode, unsigned...
  function base64_decode (line 87) | std::string base64_decode(std::string const& encoded_string) {

FILE: server/src/frame.cpp
  function Frame (line 29) | Frame Frame_pool::alloc_frame(void) {
  function frame_to_json (line 50) | int frame_to_json(void* buf, const Frame& frame) {
  function json_to_frame (line 63) | void json_to_frame(void* buf, Frame& frame) {

FILE: server/src/frame.hpp
  type Frame (line 6) | struct Frame {
  class Frame_pool (line 19) | class Frame_pool

FILE: server/src/mem_pool.cpp
  type _Unit (line 20) | struct _Unit
  type _Unit (line 29) | struct _Unit
  type _Unit (line 29) | struct _Unit
  type _Unit (line 29) | struct _Unit
  type _Unit (line 78) | struct _Unit
  type _Unit (line 93) | struct _Unit
  type _Unit (line 113) | struct _Unit
  type _Unit (line 113) | struct _Unit
  type _Unit (line 113) | struct _Unit

FILE: server/src/mem_pool.hpp
  class CMemPool (line 4) | class CMemPool
    type _Unit (line 8) | struct _Unit                     //The type of the node of linkedlist.
      type _Unit (line 10) | struct _Unit
    type _Unit (line 16) | struct _Unit
      type _Unit (line 10) | struct _Unit
    type _Unit (line 17) | struct _Unit
      type _Unit (line 10) | struct _Unit

FILE: server/src/people.cpp
  function Person (line 233) | Person& Person::operator=(const Person& p)

FILE: server/src/people.hpp
  function intRoundUp (line 17) | inline int intRoundUp(const T a)
  class Person (line 22) | class Person
    type Joint (line 41) | struct Joint {
    type Change (line 46) | struct Change {
    method Person (line 69) | Person() : enemy(nullptr), overlap_count(0), track_id(UNTRACK), action...
    method set_id (line 74) | void set_id(int id) { track_id = id; }
    method set_enemy (line 76) | void set_enemy(Person* p) { enemy = p; }
    method get_id (line 80) | inline int get_id(void) const { return track_id; }
    method get_action (line 81) | inline int get_action(void) const { return action; }
    method Person (line 86) | inline const Person* get_enemy(void) const { return enemy; }
    method is_danger (line 91) | inline bool is_danger(void) const { return (action == PUNCH || action ...
  class People (line 106) | class People
    method People (line 115) | People() {}
    method People (line 117) | People(std::vector<float> _keypoints, std::vector<int> _keyshape, floa...
    method get_person_num (line 120) | inline int get_person_num(void) const { return keyshape[0]; }
    method serialize (line 127) | void serialize(Archive & ar, const unsigned int version)

FILE: server/src/pose_detector.cpp
  function intRound (line 14) | inline int intRound(const T a)
  function T (line 20) | inline T fastMin(const T a, const T b)
  function Mat (line 414) | Mat PoseDetector::create_netsize_im

FILE: server/src/pose_detector.hpp
  class PoseDetector (line 12) | class PoseDetector : public Detector, public DetectorInterface
    method People (line 23) | inline People* get_people(void) { return det_people; }

FILE: server/src/sink.cpp
  function sig_handler (line 32) | void sig_handler(int s)
  function main (line 92) | int main()

FILE: server/src/ventilator.cpp
  function sig_handler (line 21) | void sig_handler(int s)
  function main (line 70) | int main()

FILE: server/src/worker.cpp
  function sig_handler (line 32) | void sig_handler(int s)
  function main (line 83) | int main(int argc, char *argv[])

FILE: server/src/yolo_v2_class.hpp
  type bbox_t (line 22) | struct bbox_t {
  type image_t (line 31) | struct image_t {
  type bbox_t_container (line 38) | struct bbox_t_container {
  class Detector (line 70) | class Detector {
    method detect_resized (line 100) | std::vector<bbox_t> detect_resized(image_t img, int init_w, int init_h...
    method detect (line 111) | std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_m...
    method mat_to_image_resize (line 119) | std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const
    method mat_to_image (line 133) | static std::shared_ptr<image_t> mat_to_image(cv::Mat img_src)
    method image_t (line 147) | static image_t mat_to_image_custom(cv::Mat mat)
    method image_t (line 165) | static image_t make_empty_image(int w, int h, int c)
    method image_t (line 175) | static image_t make_image_custom(int w, int h, int c)
    method send_json_http (line 186) | bool send_json_http(std::vector<bbox_t> cur_bbox_vec, std::vector<std:...
  class Tracker_optflow (line 240) | class Tracker_optflow {
    method Tracker_optflow (line 247) | Tracker_optflow(int _gpu_id = 0, int win_size = 15, int max_level = 3,...
    method update_cur_bbox_vec (line 278) | void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    method update_tracking_flow (line 307) | void update_tracking_flow(cv::Mat src_mat, std::vector<bbox_t> _cur_bb...
    method tracking_flow (line 344) | std::vector<bbox_t> tracking_flow(cv::Mat dst_mat, bool check_error = ...
    method Tracker_optflow (line 433) | Tracker_optflow(int win_size = 15, int max_level = 3, int iterations =...
    method update_cur_bbox_vec (line 453) | void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    method update_tracking_flow (line 472) | void update_tracking_flow(cv::Mat new_src_mat, std::vector<bbox_t> _cu...
    method tracking_flow (line 491) | std::vector<bbox_t> tracking_flow(cv::Mat new_dst_mat, bool check_erro...
  class Tracker_optflow (line 428) | class Tracker_optflow {
    method Tracker_optflow (line 247) | Tracker_optflow(int _gpu_id = 0, int win_size = 15, int max_level = 3,...
    method update_cur_bbox_vec (line 278) | void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    method update_tracking_flow (line 307) | void update_tracking_flow(cv::Mat src_mat, std::vector<bbox_t> _cur_bb...
    method tracking_flow (line 344) | std::vector<bbox_t> tracking_flow(cv::Mat dst_mat, bool check_error = ...
    method Tracker_optflow (line 433) | Tracker_optflow(int win_size = 15, int max_level = 3, int iterations =...
    method update_cur_bbox_vec (line 453) | void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    method update_tracking_flow (line 472) | void update_tracking_flow(cv::Mat new_src_mat, std::vector<bbox_t> _cu...
    method tracking_flow (line 491) | std::vector<bbox_t> tracking_flow(cv::Mat new_dst_mat, bool check_erro...
  class Tracker_optflow (line 550) | class Tracker_optflow {}
    method Tracker_optflow (line 247) | Tracker_optflow(int _gpu_id = 0, int win_size = 15, int max_level = 3,...
    method update_cur_bbox_vec (line 278) | void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    method update_tracking_flow (line 307) | void update_tracking_flow(cv::Mat src_mat, std::vector<bbox_t> _cur_bb...
    method tracking_flow (line 344) | std::vector<bbox_t> tracking_flow(cv::Mat dst_mat, bool check_error = ...
    method Tracker_optflow (line 433) | Tracker_optflow(int win_size = 15, int max_level = 3, int iterations =...
    method update_cur_bbox_vec (line 453) | void update_cur_bbox_vec(std::vector<bbox_t> _cur_bbox_vec)
    method update_tracking_flow (line 472) | void update_tracking_flow(cv::Mat new_src_mat, std::vector<bbox_t> _cu...
    method tracking_flow (line 491) | std::vector<bbox_t> tracking_flow(cv::Mat new_dst_mat, bool check_erro...
  function obj_id_to_color (line 557) | static cv::Scalar obj_id_to_color(int obj_id) {
  class preview_boxes_t (line 566) | class preview_boxes_t {
    type preview_box_track_t (line 569) | struct preview_box_track_t {
      method preview_box_track_t (line 574) | preview_box_track_t() : track_id(0), obj_id(0), last_showed_frames_a...
    method preview_boxes_t (line 580) | preview_boxes_t(size_t _preview_box_size = 100, size_t _bottom_offset ...
    method set (line 584) | void set(cv::Mat src_mat, std::vector<bbox_t> result_vec)
    method draw (line 650) | void draw(cv::Mat draw_mat, bool show_small_boxes = false)
  class track_kalman_t (line 700) | class track_kalman_t
    type tst_t (line 712) | struct tst_t {
      method tst_t (line 717) | tst_t() : track_id(-1), state_id(-1) {}
    type one_kalman_t (line 722) | struct one_kalman_t
      method set_delta_time (line 732) | void set_delta_time(float dT) {
      method set (line 737) | void set(bbox_t box)
      method correct (line 761) | void correct(bbox_t box) {
      method bbox_t (line 779) | bbox_t predict() {
      method initialize_kalman (line 790) | void initialize_kalman()
      method one_kalman_t (line 840) | one_kalman_t(int _stateSize = 6, int _measSize = 4, int _contrSize =...
    type one_kalman_t (line 725) | struct one_kalman_t
      method set_delta_time (line 732) | void set_delta_time(float dT) {
      method set (line 737) | void set(bbox_t box)
      method correct (line 761) | void correct(bbox_t box) {
      method bbox_t (line 779) | bbox_t predict() {
      method initialize_kalman (line 790) | void initialize_kalman()
      method one_kalman_t (line 840) | one_kalman_t(int _stateSize = 6, int _measSize = 4, int _contrSize =...
    method track_kalman_t (line 855) | track_kalman_t(int _max_objects = 1000, int _min_frames = 3, float _ma...
    method calc_dt (line 864) | float calc_dt() {
    method get_distance (line 869) | static float get_distance(float src_x, float src_y, float dst_x, float...
    method clear_old_states (line 873) | void clear_old_states() {
    method tst_t (line 895) | tst_t get_state_id(bbox_t find_box, std::vector<bool> &busy_vec)
      method tst_t (line 717) | tst_t() : track_id(-1), state_id(-1) {}
    method tst_t (line 933) | tst_t new_state_id(std::vector<bool> &busy_vec)
      method tst_t (line 717) | tst_t() : track_id(-1), state_id(-1) {}
    method find_state_ids (line 951) | std::vector<tst_t> find_state_ids(std::vector<bbox_t> result_vec)
    method predict (line 987) | std::vector<bbox_t> predict()
    method correct (line 1022) | std::vector<bbox_t> correct(std::vector<bbox_t> result_vec)
Condensed preview — 55 files, each showing path, character count, and a content snippet.
[
  {
    "path": "LICENSE",
    "chars": 1068,
    "preview": "MIT License\n\nCopyright (c) 2019 Seungsu Lim\n\nPermission is hereby granted, free of charge, to any person obtaining a cop"
  },
  {
    "path": "README.md",
    "chars": 12698,
    "preview": "### Overview\nReal time Fight Detection Based on 2D Pose Estimation and RNN Action Recognition. \n\nThis project is based o"
  },
  {
    "path": "client/darknet_client/Makefile",
    "chars": 811,
    "preview": "DEBUG = 1\n\nCPP = g++\nCOMMON = -DOPENCV\nCXXFLAGS = -g -Wall -O2 -std=c++11 -DOPENCV\nLDFLAGS = -lstdc++ -lpthread -lzmq -l"
  },
  {
    "path": "client/darknet_client/darknet_client.vcxproj",
    "chars": 7640,
    "preview": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"15.0\" xmlns=\"http://schemas.microso"
  },
  {
    "path": "client/darknet_client/darknet_client.vcxproj.filters",
    "chars": 1897,
    "preview": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuil"
  },
  {
    "path": "client/darknet_client/darknet_client.vcxproj.user",
    "chars": 615,
    "preview": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"15.0\" xmlns=\"http://schemas.microsoft.com/developer/msbui"
  },
  {
    "path": "client/darknet_client/src/args.cpp",
    "chars": 1429,
    "preview": "// https://github.com/pjreddie/template\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"args.hpp\"\n"
  },
  {
    "path": "client/darknet_client/src/args.hpp",
    "chars": 358,
    "preview": "// https://github.com/pjreddie/template\n\n#ifndef ARGS_H\n#define ARGS_H\n\nint find_arg(int argc, char* argv[], const char "
  },
  {
    "path": "client/darknet_client/src/base64.cpp",
    "chars": 3600,
    "preview": "/*\n   base64.cpp and base64.h\n   base64 encoding and decoding with C++.\n   Version: 1.01.00\n   Copyright (C) 2004-2017 "
  },
  {
    "path": "client/darknet_client/src/base64.hpp",
    "chars": 374,
    "preview": "//\n//  base64 encoding and decoding with C++.\n//  Version: 1.01.00\n//\n\n#ifndef BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E"
  },
  {
    "path": "client/darknet_client/src/frame.cpp",
    "chars": 2782,
    "preview": "#include <sstream>\t// for stringstream\n#include <json-c/json.h>\t// for json\n#include <cstring> // for memcpy\n#include \"f"
  },
  {
    "path": "client/darknet_client/src/frame.hpp",
    "chars": 725,
    "preview": "#ifndef __FRAME_HPP\n#define __FRAME_HPP\n#include \"mem_pool.hpp\"\n\nstruct Frame {\n  int seq_len;\n  int msg_len;\n  int det_"
  },
  {
    "path": "client/darknet_client/src/main.cpp",
    "chars": 11294,
    "preview": "#include <zmq.h>\n#include <iostream>\n#include <thread>\n#include <queue>\n#include <chrono>\n#include <string>\n#include <fs"
  },
  {
    "path": "client/darknet_client/src/mem_pool.cpp",
    "chars": 3533,
    "preview": "#include <stdio.h>\n#include <stdlib.h>\n#include \"mem_pool.hpp\"\n/*======================================================="
  },
  {
    "path": "client/darknet_client/src/mem_pool.hpp",
    "chars": 1101,
    "preview": "#ifndef __MEMPOOL_H__\n#define __MEMPOOL_H__\n// https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how"
  },
  {
    "path": "client/darknet_client/src/share_queue.hpp",
    "chars": 1697,
    "preview": "#ifndef __SHARE_QUEUE_HPP\n#define __SHARE_QUEUE_HPP\n\n// https://stackoverflow.com/questions/36762248/why-is-stdqueue-not"
  },
  {
    "path": "client/darknet_client/src/util.cpp",
    "chars": 187,
    "preview": "#include \"util.hpp\"\n\n// utility\nint str_to_int(const char* str, int len)\n{\n  int i;\n  int ret = 0;\n  for (i = 0; i < len"
  },
  {
    "path": "client/darknet_client/src/util.hpp",
    "chars": 88,
    "preview": "#ifndef __UTIL_HPP\n#define __UTIL_HPP\n\nint str_to_int(const char* str, int len);\n\n#endif"
  },
  {
    "path": "client/darknet_client.sln",
    "chars": 1422,
    "preview": "\nMicrosoft Visual Studio Solution File, Format Version 12.00\n# Visual Studio 15\nVisualStudioVersion = 15.0.28307.852\nMi"
  },
  {
    "path": "server/Makefile",
    "chars": 1399,
    "preview": "DEBUG = 1\n\nCPP = g++\nCOMMON = -DOPENCV\nCXXFLAGS = -g -Wall -O2 -std=c++11 -DOPENCV\nLDFLAGS = -lstdc++ -lpthread -lzmq -l"
  },
  {
    "path": "server/action.py",
    "chars": 1688,
    "preview": "import tensorflow as tf\nimport numpy as np\nimport zmq\nimport io\nimport time\nfrom tensorflow import keras\n\nfrom tensorflo"
  },
  {
    "path": "server/action_train.py",
    "chars": 2864,
    "preview": "import tensorflow as tf\nimport numpy as np\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nconfig = tf."
  },
  {
    "path": "server/cfg/openpose.cfg",
    "chars": 8329,
    "preview": "[net]\nwidth=200\nheight=200\nchannels=3\n\n[convolutional]\nbatch_normalize=0\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=rel"
  },
  {
    "path": "server/src/DetectorInterface.hpp",
    "chars": 281,
    "preview": "#ifndef __DET_INTERFACE\n#define __DET_INTERFACE\n#include <string>\n#include <opencv2/opencv.hpp>\n\nclass DetectorInterface"
  },
  {
    "path": "server/src/Hungarian.cpp",
    "chars": 12094,
    "preview": "///////////////////////////////////////////////////////////////////////////////\n// Hungarian.cpp: Implementation file fo"
  },
  {
    "path": "server/src/Hungarian.h",
    "chars": 2099,
    "preview": "///////////////////////////////////////////////////////////////////////////////\n// Hungarian.h: Header file for Class Hu"
  },
  {
    "path": "server/src/KalmanTracker.cpp",
    "chars": 4487,
    "preview": "///////////////////////////////////////////////////////////////////////////////\n// KalmanTracker.cpp: KalmanTracker Clas"
  },
  {
    "path": "server/src/KalmanTracker.h",
    "chars": 1284,
    "preview": "///////////////////////////////////////////////////////////////////////////////\n// KalmanTracker.h: KalmanTracker Class "
  },
  {
    "path": "server/src/Tracker.cpp",
    "chars": 5328,
    "preview": "#include <vector>\n#include <set>\n#include <iterator>\n#include <iostream>\n#include \"Hungarian.h\"\n#include \"Tracker.hpp\"\n\n"
  },
  {
    "path": "server/src/Tracker.hpp",
    "chars": 955,
    "preview": "#pragma once\n#include \"KalmanTracker.h\"\n#include <set>\n#include \"people.hpp\"\n\ntypedef struct TrackingBox\n{\n  int frame;\n"
  },
  {
    "path": "server/src/args.cpp",
    "chars": 1429,
    "preview": "// https://github.com/pjreddie/template\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"args.hpp\"\n"
  },
  {
    "path": "server/src/args.hpp",
    "chars": 358,
    "preview": "// https://github.com/pjreddie/template\n\n#ifndef ARGS_H\n#define ARGS_H\n\nint find_arg(int argc, char* argv[], const char "
  },
  {
    "path": "server/src/base64.cpp",
    "chars": 3656,
    "preview": "/* \n   base64.cpp and base64.h\n\n   base64 encoding and decoding with C++.\n\n   Version: 1.01.00\n\n   Copyright (C) 2004-20"
  },
  {
    "path": "server/src/base64.h",
    "chars": 376,
    "preview": "//\n//  base64 encoding and decoding with C++.\n//  Version: 1.01.00\n//\n\n#ifndef BASE64_H_C0CE2A47_D10E_42C9_A27C_C883944E"
  },
  {
    "path": "server/src/frame.cpp",
    "chars": 2747,
    "preview": "#include <string>\n#include <sstream>\n#include <cstring>\n#include \"frame.hpp\"\n#include \"json.h\"\n#include \"base64.h\"\n\nusin"
  },
  {
    "path": "server/src/frame.hpp",
    "chars": 751,
    "preview": "#ifndef __FRAME_H\n#define __FRAME_H\n\n#include \"mem_pool.hpp\"\n\nstruct Frame {\n  int seq_len;\n  int msg_len;\n  int det_len"
  },
  {
    "path": "server/src/mem_pool.cpp",
    "chars": 3762,
    "preview": "#include <stdio.h>\n#include <stdlib.h>\n#include \"mem_pool.hpp\"\n\n/*======================================================"
  },
  {
    "path": "server/src/mem_pool.hpp",
    "chars": 1139,
    "preview": "#ifndef __MEMPOOL_H__\n#define __MEMPOOL_H__\n// https://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how"
  },
  {
    "path": "server/src/people.cpp",
    "chars": 9749,
    "preview": "#include <boost/archive/text_oarchive.hpp>\n#include <boost/archive/text_iarchive.hpp>\n#include <boost/serialization/vect"
  },
  {
    "path": "server/src/people.hpp",
    "chars": 3436,
    "preview": "#ifndef __PEOPLE\n#define __PEOPLE\n#include <boost/archive/text_oarchive.hpp>\n#include <boost/archive/text_iarchive.hpp>\n"
  },
  {
    "path": "server/src/pose_detector.cpp",
    "chars": 17256,
    "preview": "#include <iostream>\n#include <vector>\nusing namespace std;\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/hi"
  },
  {
    "path": "server/src/pose_detector.hpp",
    "chars": 1462,
    "preview": "#ifndef __POSE_DETECTOR\n#define __POSE_DETECTOR\n#include <vector>\n#include <string>\n#include <opencv2/core/core.hpp>\n#in"
  },
  {
    "path": "server/src/share_queue.h",
    "chars": 1614,
    "preview": "#pragma once\n// https://stackoverflow.com/questions/36762248/why-is-stdqueue-not-thread-safe\n#include <deque>\n#include <"
  },
  {
    "path": "server/src/sink.cpp",
    "chars": 8851,
    "preview": "#include <zmq.h>\n#include <stdio.h>\n#include <iostream>\n#include <sstream>\n#include <fstream>\n#include <string>\n#include"
  },
  {
    "path": "server/src/ventilator.cpp",
    "chars": 2305,
    "preview": "#include <zmq.h>\n#include <iostream>\n#include <cassert>\n#include <pthread.h>\n#include <csignal>\n#include \"share_queue.h\""
  },
  {
    "path": "server/src/worker.cpp",
    "chars": 5119,
    "preview": "#include <zmq.h>\n#include <iostream>\n#include <sstream>\n#include <string>\n#include <vector>\n#include <memory>\n#include <"
  },
  {
    "path": "server/src/yolo_v2_class.hpp",
    "chars": 39589,
    "preview": "#ifndef YOLO_V2_CLASS_HPP\n#define YOLO_V2_CLASS_HPP\n\n#ifndef LIB_API\n#ifdef LIB_EXPORTS\n#if defined(_MSC_VER)\n#define LI"
  },
  {
    "path": "server/train/train.txt",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "util/171204_pose3_info.txt",
    "chars": 133,
    "preview": "0, 0, 10\n1, 10, 13\n0, 20, 25\n3, 37, 44\n3, 56, 60\n0, 116, 136\n1, 147, 150\n1, 162, 165\n3, 192, 199\n3, 211, 215\n0, 272, 302"
  },
  {
    "path": "util/171204_pose5_info.txt",
    "chars": 421,
    "preview": "1, 6, 9\n0, 17, 23\n3, 33, 41\n3, 53, 56\n0, 114, 144\n1, 145, 148\n1, 160, 164\n0, 173, 177\n3, 188, 195\n3, 208, 211\n0, 268, 29"
  },
  {
    "path": "util/171204_pose6_info.txt",
    "chars": 349,
    "preview": "0, 1, 10\n1, 11, 15\n0, 23, 28\n3, 39, 46\n3, 58, 60\n0, 119, 149\n1, 150, 152\n1, 165, 170\n0, 178, 182\n3, 195, 211\n3, 213, 217"
  },
  {
    "path": "util/concat.py",
    "chars": 787,
    "preview": "'''\nthis python script for MHAD dataset\nTo run this script is required ffmpeg-python(https://github.com/kkroening/ffmpeg"
  },
  {
    "path": "util/cut.py",
    "chars": 1308,
    "preview": "'''\nthis python script for CMU panoptic dataset\nTo run this script is required ffmpeg-python(https://github.com/kkroenin"
  }
]

// ... and 2 more files (download for full content)

About this extraction

This page contains the full source code of the imsoo/fight_detection GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 55 files (196.0 KB), approximately 59.9k tokens, and a symbol index of 152 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
