[
  {
    "path": ".gitignore",
    "content": "*.suo\n*.sdf\n*.user\n*.opensdf\n*.txt\n*.idea\n!CMakeLists.txt\n\nbuild\nipch\nDebug\nRelease\nx64\ndata/DetectorData\ndata/FERData"
  },
  {
    "path": "FaceCept3D/3dheadposeestimation.sln",
    "content": "﻿\nMicrosoft Visual Studio Solution File, Format Version 12.00\n# Visual Studio 2013\nVisualStudioVersion = 12.0.21005.1\nMinimumVisualStudioVersion = 10.0.40219.1\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"HeadPoseEstimationFramework\", \"HeadPoseEstimationFramework\\HeadPoseEstimationFramework.vcxproj\", \"{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}\"\nEndProject\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"TemplateCreator\", \"TemplateCreator\\TemplateCreator.vcxproj\", \"{39EBEEC1-29C0-48BA-9858-E593CAEE7757}\"\n\tProjectSection(ProjectDependencies) = postProject\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD} = {A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}\n\tEndProjectSection\nEndProject\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"TemplateTracker\", \"TemplateTracker\\TemplateTracker.vcxproj\", \"{68A9917E-A682-4013-8D30-389AB45A9E0D}\"\n\tProjectSection(ProjectDependencies) = postProject\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD} = {A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}\n\tEndProjectSection\nEndProject\nGlobal\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\n\t\tDebug|Mixed Platforms = Debug|Mixed Platforms\n\t\tDebug|Win32 = Debug|Win32\n\t\tDebug|x64 = Debug|x64\n\t\tRelease|Mixed Platforms = Release|Mixed Platforms\n\t\tRelease|Win32 = Release|Win32\n\t\tRelease|x64 = Release|x64\n\tEndGlobalSection\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|Mixed Platforms.ActiveCfg = Debug|x64\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|Win32.ActiveCfg = Debug|x64\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|x64.Build.0 = Debug|x64\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|Mixed Platforms.ActiveCfg = Release|x64\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|Win32.ActiveCfg = 
Release|x64\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|x64.ActiveCfg = Release|x64\n\t\t{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|x64.Build.0 = Release|x64\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Mixed Platforms.ActiveCfg = Debug|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Mixed Platforms.Build.0 = Debug|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Win32.ActiveCfg = Debug|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Win32.Build.0 = Debug|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|x64.Build.0 = Debug|x64\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Mixed Platforms.ActiveCfg = Release|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Mixed Platforms.Build.0 = Release|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Win32.ActiveCfg = Release|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Win32.Build.0 = Release|Win32\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|x64.ActiveCfg = Release|x64\n\t\t{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|x64.Build.0 = Release|x64\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Mixed Platforms.ActiveCfg = Debug|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Mixed Platforms.Build.0 = Debug|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Win32.ActiveCfg = Debug|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Win32.Build.0 = Debug|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|x64.Build.0 = Debug|x64\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Mixed Platforms.ActiveCfg = Release|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Mixed Platforms.Build.0 = Release|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Win32.ActiveCfg = Release|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Win32.Build.0 = 
Release|Win32\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|x64.ActiveCfg = Release|x64\n\t\t{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|x64.Build.0 = Release|x64\n\tEndGlobalSection\n\tGlobalSection(SolutionProperties) = preSolution\n\t\tHideSolutionNode = FALSE\n\tEndGlobalSection\nEndGlobal\n"
  },
  {
    "path": "FaceCept3D/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8)\n\nadd_subdirectory(HeadPoseEstimationFramework)\nadd_subdirectory(TemplateCreator)\n"
  },
  {
    "path": "FaceCept3D/Config/CommonProperties.props",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ImportGroup Label=\"PropertySheets\" />\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup>\n    <IncludePath>$(KINECTSDK10_DIR)/inc;$(FACECEPT3D_3RDPARTY_DIR)\\Boost\\include;$(FACECEPT3D_3RDPARTY_DIR)\\Eigen\\include;$(FACECEPT3D_3RDPARTY_DIR)\\FLANN\\include;$(FACECEPT3D_3RDPARTY_DIR)\\Opencv\\include;$(FACECEPT3D_3RDPARTY_DIR)\\PCL\\include;$(FACECEPT3D_3RDPARTY_DIR)\\Qhull\\include;$(FACECEPT3D_3RDPARTY_DIR)\\Qt\\include;$(FACECEPT3D_3RDPARTY_DIR)\\VTK\\include\\vtk-5.8;$(ProjectDir);$(VCInstallDir)include;$(VCInstallDir)atlmfc\\include;$(WindowsSdkDir)include;$(FrameworkSDKDir)\\include</IncludePath>\n    <_PropertySheetDisplayName>CommonPropertiesPaths</_PropertySheetDisplayName>\n  </PropertyGroup>\n  <ItemDefinitionGroup>\n    <ClCompile />\n    <ClCompile>\n      <AdditionalOptions>/Zm500 %(AdditionalOptions)</AdditionalOptions>\n    </ClCompile>\n  </ItemDefinitionGroup>\n  <ItemGroup />\n</Project>"
  },
  {
    "path": "FaceCept3D/Config/CommonPropertiesLibsDebug.props",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ImportGroup Label=\"PropertySheets\" />\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup />\n  <PropertyGroup>\n    <LibraryPath>$(KINECTSDK10_DIR)\\lib\\amd64;$(FACECEPT3D_3RDPARTY_DIR)\\Boost\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\FLANN\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\Opencv\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\PCL\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\Qhull\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\Qt\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\VTK\\lib\\vtk-5.8;$(VCInstallDir)lib\\amd64;$(VCInstallDir)atlmfc\\lib\\amd64;$(WindowsSdkDir)lib\\x64</LibraryPath>\n  </PropertyGroup>\n  <ItemDefinitionGroup>\n    <Link>\n      <AdditionalDependencies>HeadPoseEstimationFramework_Debug.lib;opencv_calib3d2410d.lib;opencv_contrib2410d.lib;opencv_core2410d.lib;opencv_features2d2410d.lib;opencv_flann2410d.lib;opencv_gpu2410d.lib;opencv_highgui2410d.lib;opencv_imgproc2410d.lib;opencv_legacy2410d.lib;opencv_ml2410d.lib;opencv_nonfree2410d.lib;opencv_objdetect2410d.lib;opencv_ocl2410d.lib;opencv_photo2410d.lib;opencv_stitching2410d.lib;opencv_superres2410d.lib;opencv_ts2410d.lib;opencv_video2410d.lib;opencv_videostab2410d.lib;Kinect10.lib;opengl32.lib;qhullstatic_d.lib;pcl_common_debug.lib;pcl_features_debug.lib;pcl_filters_debug.lib;pcl_io_debug.lib;pcl_io_ply_debug.lib;pcl_kdtree_debug.lib;pcl_keypoints_debug.lib;pcl_octree_debug.lib;pcl_outofcore_debug.lib;pcl_people_debug.lib;pcl_recognition_debug.lib;pcl_registration_debug.lib;pcl_sample_consensus_debug.lib;pcl_search_debug.lib;pcl_segmentation_debug.lib;pcl_surface_debug.lib;pcl_tracking_debug.lib;pcl_visualization_debug.lib;flann_cpp_s-gd.lib;libboost_atomic-vc100-mt-gd-1_55.lib;libboost_chrono-vc100-mt-gd-1_55.lib;libboost_context-vc100-mt-gd-1_55.lib;libboost_coroutine-vc100-mt-gd-1_55.lib;libboost_date_time-vc100-mt-gd-1_55.lib;libboost_exception-vc100-mt-gd-1_55.lib;libboost_filesyst
em-vc100-mt-gd-1_55.lib;libboost_graph-vc100-mt-gd-1_55.lib;libboost_iostreams-vc100-mt-gd-1_55.lib;libboost_locale-vc100-mt-gd-1_55.lib;libboost_log_setup-vc100-mt-gd-1_55.lib;libboost_log-vc100-mt-gd-1_55.lib;libboost_math_c99f-vc100-mt-gd-1_55.lib;libboost_math_c99l-vc100-mt-gd-1_55.lib;libboost_math_c99-vc100-mt-gd-1_55.lib;libboost_math_tr1f-vc100-mt-gd-1_55.lib;libboost_math_tr1l-vc100-mt-gd-1_55.lib;libboost_math_tr1-vc100-mt-gd-1_55.lib;libboost_prg_exec_monitor-vc100-mt-gd-1_55.lib;libboost_program_options-vc100-mt-gd-1_55.lib;libboost_random-vc100-mt-gd-1_55.lib;libboost_regex-vc100-mt-gd-1_55.lib;libboost_serialization-vc100-mt-gd-1_55.lib;libboost_signals-vc100-mt-gd-1_55.lib;libboost_system-vc100-mt-gd-1_55.lib;libboost_test_exec_monitor-vc100-mt-gd-1_55.lib;libboost_thread-vc100-mt-gd-1_55.lib;libboost_timer-vc100-mt-gd-1_55.lib;libboost_unit_test_framework-vc100-mt-gd-1_55.lib;libboost_wave-vc100-mt-gd-1_55.lib;libboost_wserialization-vc100-mt-gd-1_55.lib;phonond4.lib;QAxContainerd.lib;QAxServerd.lib;Qt3Supportd4.lib;QtCLucened4.lib;QtCored4.lib;QtDeclaratived4.lib;QtDesignerComponentsd4.lib;QtDesignerd4.lib;QtGuid4.lib;QtHelpd4.lib;qtmaind.lib;QtMultimediad4.lib;QtNetworkd4.lib;QtOpenGLd4.lib;QtScriptd4.lib;QtScriptToolsd4.lib;QtSqld4.lib;QtSvgd4.lib;QtTestd4.lib;QtUiToolsd.lib;QtWebKitd4.lib;QtXmld4.lib;QtXmlPatternsd4.lib;MapReduceMPI-gd.lib;mpistubs-gd.lib;QVTK-gd.lib;vtkalglib-gd.lib;vtkCharts-gd.lib;vtkCommon-gd.lib;vtkDICOMParser-gd.lib;vtkexoIIc-gd.lib;vtkexpat-gd.lib;vtkFiltering-gd.lib;vtkfreetype-gd.lib;vtkftgl-gd.lib;vtkGenericFiltering-gd.lib;vtkGeovis-gd.lib;vtkGraphics-gd.lib;vtkhdf5-gd.lib;vtkHybrid-gd.lib;vtkImaging-gd.lib;vtkInfovis-gd.lib;vtkIO-gd.lib;vtkjpeg-gd.lib;vtklibxml2-gd.lib;vtkmetaio-gd.lib;vtkNetCDF-gd.lib;vtkNetCDF_cxx-gd.lib;vtkpng-gd.lib;vtkproj4-gd.lib;vtkRendering-gd.lib;vtksqlite-gd.lib;vtksys-gd.lib;vtktiff-gd.lib;vtkverdict-gd.lib;vtkViews-gd.lib;vtkVolumeRendering-gd.lib;vtkWidgets-gd.lib;vtkzlib-gd.lib;kernel32.
lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemGroup />\n</Project>"
  },
  {
    "path": "FaceCept3D/Config/CommonPropertiesLibsRelease.props",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ImportGroup Label=\"PropertySheets\" />\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup />\n  <PropertyGroup>\n    <LibraryPath>$(KINECTSDK10_DIR)\\lib\\amd64;$(FACECEPT3D_3RDPARTY_DIR)\\Boost\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\FLANN\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\Opencv\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\PCL\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\Qhull\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\Qt\\lib;$(FACECEPT3D_3RDPARTY_DIR)\\VTK\\lib\\vtk-5.8;$(VCInstallDir)lib\\amd64;$(VCInstallDir)atlmfc\\lib\\amd64;$(WindowsSdkDir)lib\\x64</LibraryPath>\n  </PropertyGroup>\n  <ItemDefinitionGroup>\n    <Link>\n      <AdditionalDependencies>HeadPoseEstimationFramework_Release.lib;Kinect10.lib;opengl32.lib;MapReduceMPI.lib;mpistubs.lib;QVTK.lib;vtkalglib.lib;vtkCharts.lib;vtkCommon.lib;vtkDICOMParser.lib;vtkexoIIc.lib;vtkexpat.lib;vtkFiltering.lib;vtkfreetype.lib;vtkftgl.lib;vtkGenericFiltering.lib;vtkGeovis.lib;vtkGraphics.lib;vtkhdf5.lib;vtkHybrid.lib;vtkImaging.lib;vtkInfovis.lib;vtkIO.lib;vtkjpeg.lib;vtklibxml2.lib;vtkmetaio.lib;vtkNetCDF.lib;vtkNetCDF_cxx.lib;vtkpng.lib;vtkproj4.lib;vtkRendering.lib;vtksqlite.lib;vtksys.lib;vtktiff.lib;vtkverdict.lib;vtkViews.lib;vtkVolumeRendering.lib;vtkWidgets.lib;vtkzlib.lib;opencv_calib3d2410.lib;opencv_contrib2410.lib;opencv_core2410.lib;opencv_features2d2410.lib;opencv_flann2410.lib;opencv_gpu2410.lib;opencv_highgui2410.lib;opencv_imgproc2410.lib;opencv_legacy2410.lib;opencv_ml2410.lib;opencv_nonfree2410.lib;opencv_objdetect2410.lib;opencv_ocl2410.lib;opencv_photo2410.lib;opencv_stitching2410.lib;opencv_superres2410.lib;opencv_ts2410.lib;opencv_video2410.lib;opencv_videostab2410.lib;qhullstatic.lib;pcl_common_release.lib;pcl_features_release.lib;pcl_filters_release.lib;pcl_io_ply_release.lib;pcl_io_release.lib;pcl_kdtree_release.lib;pcl_keypoints_release.lib;pcl_octree_release.lib;
pcl_outofcore_release.lib;pcl_people_release.lib;pcl_recognition_release.lib;pcl_registration_release.lib;pcl_sample_consensus_release.lib;pcl_search_release.lib;pcl_segmentation_release.lib;pcl_surface_release.lib;pcl_tracking_release.lib;pcl_visualization_release.lib;flann_cpp_s.lib;libboost_atomic-vc100-mt-1_55.lib;libboost_chrono-vc100-mt-1_55.lib;libboost_context-vc100-mt-1_55.lib;libboost_coroutine-vc100-mt-1_55.lib;libboost_date_time-vc100-mt-1_55.lib;libboost_exception-vc100-mt-1_55.lib;libboost_filesystem-vc100-mt-1_55.lib;libboost_graph-vc100-mt-1_55.lib;libboost_iostreams-vc100-mt-1_55.lib;libboost_locale-vc100-mt-1_55.lib;libboost_log-vc100-mt-1_55.lib;libboost_log_setup-vc100-mt-1_55.lib;libboost_math_c99-vc100-mt-1_55.lib;libboost_math_c99f-vc100-mt-1_55.lib;libboost_math_c99l-vc100-mt-1_55.lib;libboost_math_tr1-vc100-mt-1_55.lib;libboost_math_tr1f-vc100-mt-1_55.lib;libboost_math_tr1l-vc100-mt-1_55.lib;libboost_prg_exec_monitor-vc100-mt-1_55.lib;libboost_program_options-vc100-mt-1_55.lib;libboost_random-vc100-mt-1_55.lib;libboost_regex-vc100-mt-1_55.lib;libboost_serialization-vc100-mt-1_55.lib;libboost_signals-vc100-mt-1_55.lib;libboost_system-vc100-mt-1_55.lib;libboost_test_exec_monitor-vc100-mt-1_55.lib;libboost_thread-vc100-mt-1_55.lib;libboost_timer-vc100-mt-1_55.lib;libboost_unit_test_framework-vc100-mt-1_55.lib;libboost_wave-vc100-mt-1_55.lib;libboost_wserialization-vc100-mt-1_55.lib;phonon4.lib;QAxContainer.lib;QAxServer.lib;Qt3Support4.lib;QtCLucene4.lib;QtCore4.lib;QtDeclarative4.lib;QtDesigner4.lib;QtDesignerComponents4.lib;QtGui4.lib;QtHelp4.lib;qtmain.lib;QtMultimedia4.lib;QtNetwork4.lib;QtOpenGL4.lib;QtScript4.lib;QtScriptTools4.lib;QtSql4.lib;QtSvg4.lib;QtTest4.lib;QtUiTools.lib;QtWebKit4.lib;QtXml4.lib;QtXmlPatterns4.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemGroup />\n</Project>"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Align/CloudMapper.cpp",
    "content": "#include \"CloudMapper.h\"\n\n#include <pcl/common/common.h>\n\n#include <pcl/registration/registration.h>\n\n#include <pcl/registration/icp.h>\n#include <pcl/registration/correspondence_rejection_sample_consensus.h>\n#include <pcl/registration/transformation_estimation_point_to_plane.h>\n#include <pcl/registration/transformation_estimation_point_to_plane_weighted.h>\n#include <pcl/registration/transformation_estimation_svd_scale.h>\n#include <pcl/registration/correspondence_estimation_normal_shooting.h>\n#include <pcl/visualization/cloud_viewer.h>\n\n#include <pcl/features/normal_3d.h>\n\n#include <pcl/correspondence.h>\n#include <pcl/search/kdtree.h>\n\n#include <boost/format.hpp>\n#include <boost/thread.hpp>\n#include <boost/thread/thread.hpp>\n#include <boost/thread/thread_time.hpp>\n\n#include <pcl/visualization/pcl_visualizer.h>\n#include <pcl/visualization/cloud_viewer.h>\n\n#include \"UI/ShowCloud.h\"\n#include \"Exception/HPEException.h\"\n#include \"Filter/Filters.h\"\n#include \"TransformationEstimationHPE.h\"\n\n#include <pcl/io/pcd_io.h>\n\nnamespace hpe\n{\n    float CalculateDistance(pcl::PointXYZRGBA &p1, pcl::PointXYZRGBA &p2)\n    {\n        Eigen::VectorXf d(3);\n        d(0) = p1.x - p2.x;\n        d(1) = p1.y - p2.y;\n        d(2) = p1.z - p2.z;\n\n        return d.norm();\n    }\n\n    CloudMapper::CloudMapper()\n        : m_icpParams(CloudMapper::ICPParams(0.05, 0.05, 0.000001, 100000, 100)),\n          m_useDynamicICP(false), m_usePointToPlaneMetric(false), m_useNormalShooting(false),\n          m_estimateScale(false)\n\n    {\n\n    }\n\n    void CloudMapper::AddCloud(CloudMapper::Cloud::Ptr &cloud, CloudMapper::Landmarks &landmarks)\n    {\n        m_pointClouds.push_back(cloud);\n        m_landmarks.push_back(landmarks);\n    }\n\n    CloudMapper::Clouds CloudMapper::GetAlignedClouds(TransformationsPtr transformations)\n    {\n        return AlignAll(m_pointClouds, m_landmarks);\n    }\n\n    CloudMapper::Cloud::Ptr 
CloudMapper::GetMergedCloud()\n    {\n        CloudMapper::Clouds alignedClouds = AlignAll(m_pointClouds, m_landmarks);\n        CloudMapper::Cloud::Ptr merged(new CloudMapper::Cloud);\n        // A for loop is safe even when alignedClouds is empty,\n        // unlike the previous do/while which dereferenced begin() unconditionally.\n        for (auto it = alignedClouds.begin(); it != alignedClouds.end(); ++it)\n        {\n            *merged += **it;\n        }\n\n        return merged;\n    }\n\n    CloudMapper::TransformationsPtr CloudMapper::GetTransforms()\n    {\n        CloudMapper::TransformationsPtr transforms(new CloudMapper::Transformations);\n\n        AlignAll(m_pointClouds, m_landmarks, transforms);\n\n        return transforms;\n    }\n\n    void CloudMapper::SetICPParams(CloudMapper::ICPParams params)\n    {\n        m_icpParams = params;\n    }\n\n    void CloudMapper::SetDynamicICPParams(CloudMapper::DynamicICPParams &params)\n    {\n        m_dynamicICPParams = params;\n        SetUseDynamicICP(true);\n    }\n\n    void CloudMapper::SetUseDynamicICP(bool use)\n    {\n        m_useDynamicICP = use;\n    }\n\n    void CloudMapper::SetUseNormalShooting(bool use)\n    {\n        m_useNormalShooting = use;\n    }\n\n    void CloudMapper::SetEstimateScale(bool estimateScale)\n    {\n        m_estimateScale = estimateScale;\n    }\n\n    void CloudMapper::SetUsePointToPlaneMetric(bool use)\n    {\n        m_usePointToPlaneMetric = use;\n    }\n\n    Eigen::Matrix4f CloudMapper::GetTransformForTwoCloudsPointToPlane(CloudMapper::Cloud::Ptr &source,\n            CloudMapper::Cloud::Ptr &target,\n            ICPParams params)\n    {\n        CloudMapper::NormalCloud::Ptr normalSource(new CloudMapper::NormalCloud);\n        pcl::copyPointCloud(*source, *normalSource);\n\n        CloudMapper::NormalCloud::Ptr normalTarget(new CloudMapper::NormalCloud);\n        pcl::copyPointCloud(*target, *normalTarget);\n\n        typedef pcl::search::KdTree<CloudMapper::PointNormal> Search2;\n        Search2::Ptr search(new Search2);\n\n        
pcl::NormalEstimation<CloudMapper::PointNormal, CloudMapper::PointNormal> estimation;\n        estimation.setSearchMethod(search);\n        estimation.setKSearch(10);\n        estimation.setInputCloud(normalTarget);\n        estimation.compute(*normalTarget);\n\n        typedef pcl::registration::CorrespondenceRejectorSampleConsensus<CloudMapper::PointNormal> Rejector;\n\n        pcl::Registration<CloudMapper::PointNormal, CloudMapper::PointNormal>::Ptr registration(new pcl::IterativeClosestPoint<CloudMapper::PointNormal, CloudMapper::PointNormal>);\n\n        pcl::registration::CorrespondenceRejectorSampleConsensus<CloudMapper::PointNormal>::Ptr rejector(new pcl::registration::CorrespondenceRejectorSampleConsensus<CloudMapper::PointNormal>);\n        rejector->setInlierThreshold(params.RansacRejectionThreshold);\n        rejector->setMaximumIterations(params.MaxRANSACIterations);\n        rejector->setInputSource(normalSource);\n        rejector->setInputTarget(normalTarget);\n        registration->setInputSource(normalSource);\n        registration->setInputTarget(normalTarget);\n        registration->setMaxCorrespondenceDistance(params.MaxCorrespondenceDistance);\n        registration->setRANSACOutlierRejectionThreshold(params.RansacRejectionThreshold);\n        registration->setTransformationEpsilon(params.TransformationEpsilon);\n        registration->setMaximumIterations(params.MaxICPIterations);\n        registration->setRANSACIterations(params.MaxRANSACIterations);\n        registration->addCorrespondenceRejector(rejector);\n\n        if (m_useNormalShooting)\n        {\n            typedef pcl::registration::CorrespondenceEstimationNormalShooting<CloudMapper::PointNormal, CloudMapper::PointNormal, CloudMapper::PointNormal> NormalShooting;\n            NormalShooting::Ptr ce(new NormalShooting);\n            registration->setCorrespondenceEstimation(ce);\n        }\n\n        typedef TransformationEstimationHPE<CloudMapper::PointNormal, 
CloudMapper::PointNormal> Custom;\n        Custom::Ptr metric(new Custom);\n        metric->SetWeights(m_weights);\n        registration->setTransformationEstimation(metric);\n\n        CloudMapper::NormalCloud::Ptr c(new CloudMapper::NormalCloud);\n        registration->align(*c);\n        //std::cout << \"Fitness Score \" << registration->getFitnessScore() << std::endl;\n\n        return registration->getFinalTransformation();\n    }\n\n    Eigen::Matrix4f CloudMapper::GetTransformForTwoCloudsPointToPoint(CloudMapper::Cloud::Ptr &source,\n            CloudMapper::Cloud::Ptr &target,\n            ICPParams params)\n    {\n        pcl::Registration<CloudMapper::PointType, CloudMapper::PointType>::Ptr registration(new pcl::IterativeClosestPoint<CloudMapper::PointType, CloudMapper::PointType>);\n        typedef pcl::registration::CorrespondenceRejectorSampleConsensus<CloudMapper::PointType> Rejector;\n\n        Rejector::Ptr rejector(new Rejector);\n        rejector->setInlierThreshold(params.RansacRejectionThreshold);\n        rejector->setMaximumIterations(params.MaxRANSACIterations);\n        rejector->setInputSource(source);\n        rejector->setInputTarget(target);\n\n        registration->setInputSource(source);\n        registration->setInputTarget(target);\n        registration->setMaxCorrespondenceDistance(params.MaxCorrespondenceDistance);\n        registration->setRANSACOutlierRejectionThreshold(params.RansacRejectionThreshold);\n        registration->setTransformationEpsilon(params.TransformationEpsilon);\n        registration->setMaximumIterations(params.MaxICPIterations);\n        registration->setRANSACIterations(params.MaxRANSACIterations);\n        registration->addCorrespondenceRejector(rejector);\n\n        CloudMapper::Cloud::Ptr c(new CloudMapper::Cloud);\n        registration->align(*c);\n\n        std::cout << \"Fitness Score \" << registration->getFitnessScore() << std::endl;\n\n        return registration->getFinalTransformation();\n    
}\n\n    Eigen::Matrix4f CloudMapper::GetTransformForTwoCloudsDynamically(CloudMapper::Cloud::Ptr &source,\n            CloudMapper::Cloud::Ptr &target,\n            CloudMapper::DynamicICPParams &dynamicParams)\n    {\n\n        int icpRound = 1;\n\n        CloudMapper::Cloud::Ptr result(new Cloud);\n        pcl::copyPointCloud(*source, *result);\n        Eigen::Matrix4f finalTransformationMatrix = Eigen::Matrix4f::Identity();\n        for (auto it = dynamicParams.begin(); it != dynamicParams.end(); ++it)\n        {\n            auto params = *it;\n            Eigen::Matrix4f stepTransform = Eigen::Matrix4f::Identity();\n\n            if (m_usePointToPlaneMetric)\n            {\n                stepTransform = GetTransformForTwoCloudsPointToPlane(result, target, params);\n            }\n            else\n            {\n                stepTransform = GetTransformForTwoCloudsPointToPoint(result, target, params);\n            }\n\n            pcl::transformPointCloud(*result, *result, stepTransform);\n\n            finalTransformationMatrix = finalTransformationMatrix * stepTransform;\n        }\n\n        return finalTransformationMatrix;\n    }\n\n    Eigen::Matrix4f CloudMapper::GetTransformHavingLandmarks(CloudMapper::Cloud::Ptr &source, Common<CloudMapper::PointType>::Landmarks &l1,\n            CloudMapper::Cloud::Ptr &target, Common<CloudMapper::PointType>::Landmarks &l2)\n    {\n        pcl::CorrespondencesPtr correspondences = LandmarksToCorresponencies(source, l1, target, l2);\n        return GetTransformHavingCorrespondences(source, target, correspondences);\n    }\n\n    Eigen::Matrix4f CloudMapper::GetTransformHavingCorrespondences(CloudMapper::Cloud::Ptr &source, CloudMapper::Cloud::Ptr &target, pcl::CorrespondencesPtr &correspondences)\n    {\n        Eigen::Matrix4f initialAlignmentMatrix = ComputeInitialAlignment(source, target, correspondences);\n        bool initialAlignmentSucceed = IsNotNAN(initialAlignmentMatrix);\n\n        
CloudMapper::Cloud::Ptr sourceTransformed(new CloudMapper::Cloud);\n        if (initialAlignmentSucceed)\n        {\n            pcl::transformPointCloud(*source, *sourceTransformed, initialAlignmentMatrix);\n        }\n        else\n        {\n            sourceTransformed = source;\n        }\n\n        if (m_estimateScale)\n        {\n            ShowTwoCloudsInDifferentColors<CloudMapper::PointType>(sourceTransformed, target, pcl::CorrespondencesPtr(), \"After IA\");\n        }\n\n        CloudMapper::DynamicICPParams dynamicParams = m_dynamicICPParams;\n        if (m_useDynamicICP == false)\n        {\n            dynamicParams.push_back(m_icpParams);\n        }\n\n        CloudMapper::Cloud::Ptr result = sourceTransformed;\n        Eigen::Matrix4f finalTransformationMatrix = GetTransformForTwoCloudsDynamically(sourceTransformed, target, dynamicParams);\n\n\n        Eigen::Matrix4f totalTransform;\n        if (initialAlignmentSucceed)\n        {\n            totalTransform = finalTransformationMatrix * initialAlignmentMatrix;\n        }\n        else\n        {\n            totalTransform = finalTransformationMatrix;\n        }\n\n        pcl::transformPointCloud(*source, *result, totalTransform);\n\n        return totalTransform;\n    }\n\n    CloudMapper::Clouds CloudMapper::AlignAll(CloudMapper::Clouds &clouds, CloudMapper::LandmarksVector &landmarksVector, TransformationsPtr transformations)\n    {\n        TransformationsPtr allTransformations(new Transformations);\n        if (transformations != nullptr)\n        {\n            allTransformations = transformations;\n        }\n\n        if (clouds.size() != landmarksVector.size())\n        {\n            throw HPEException(\"CloudMapper::AlignAll - clouds.size() != landmarksVector.size()\");\n        }\n\n        std::vector<Eigen::Matrix4f> stepTransformations;\n\n        pcl::PointCloud<CloudMapper::PointType>::Ptr sourceCloud = clouds[0];\n\n        for (int cloudIndex = 1; cloudIndex < clouds.size() 
- 1; cloudIndex++)\n        {\n            int sourceIndex = cloudIndex;\n            int targetIndex = sourceIndex + 1;\n\n            pcl::PointCloud<CloudMapper::PointType>::Ptr targetCloud = clouds[targetIndex];\n\n            auto totalTransform = GetTransformHavingLandmarks(sourceCloud, landmarksVector[sourceIndex],\n                                  targetCloud, landmarksVector[targetIndex]);\n\n            Eigen::Matrix3f rotationMatrix = totalTransform.block<3, 3>(0, 0);\n            Eigen::Vector3f translationVector = totalTransform.block<3, 1>(0, 3);\n            Eigen::Quaternionf quaternion(rotationMatrix);\n            Eigen::Matrix3f rotation2(quaternion);\n\n            pcl::transformPointCloud(*sourceCloud, *sourceCloud, totalTransform);\n            pcl::PointCloud<CloudMapper::PointType>::Ptr newCloud(new pcl::PointCloud<CloudMapper::PointType>);\n            *sourceCloud += *targetCloud;\n            sourceCloud = Voxelize<CloudMapper::PointType>(sourceCloud, 0.006f);\n\n            //ShowCloud<CloudMapper::PointType>(sourceCloud);\n\n            if (IsNotNAN(totalTransform) == false)\n            {\n                std::cout << \"CloudMapper::AlignAll - step transform contains NaN\" << std::endl;\n            }\n\n            std::cout << cloudIndex << std::endl;\n\n            stepTransformations.push_back(totalTransform);\n        }\n\n        ShowCloud<CloudMapper::PointType>(sourceCloud);\n\n        CloudMapper::Clouds result;\n\n        auto targetCloud = clouds[clouds.size() - 1];\n\n        for (int cloudIndex = 0; cloudIndex < clouds.size() - 1; cloudIndex++)\n        {\n            Eigen::Matrix4f accumulatedMatrix = stepTransformations[cloudIndex];\n\n            for (int matrixIndex = cloudIndex + 1; matrixIndex < stepTransformations.size(); matrixIndex++)\n            {\n                accumulatedMatrix = accumulatedMatrix * stepTransformations[matrixIndex];\n            }\n\n            CloudMapper::Cloud::Ptr transformed(new CloudMapper::Cloud);\n            
pcl::transformPointCloud(*clouds[cloudIndex], *transformed, accumulatedMatrix);\n            allTransformations->push_back(accumulatedMatrix);\n\n            result.push_back(transformed);\n        }\n\n        result.push_back(targetCloud);\n\n        return result;\n    }\n\n    Eigen::Matrix4f CloudMapper::ComputeInitialAlignment(Cloud::Ptr &cloud1, Cloud::Ptr &cloud2, pcl::CorrespondencesPtr &correspondences)\n    {\n        if (correspondences->empty())\n        {\n            return Eigen::Matrix4f::Identity();\n        }\n\n        pcl::registration::CorrespondenceRejectorSampleConsensus<PointType> sac;\n        sac.setInlierThreshold(0.5);\n        sac.setInputSource(cloud1);\n        sac.setInputTarget(cloud2);\n        sac.setInputCorrespondences(correspondences);\n        pcl::Correspondences inliers;\n        sac.getCorrespondences(inliers);\n\n        Eigen::Matrix4f transformation;\n        if (m_estimateScale)\n        {\n            pcl::registration::TransformationEstimationSVDScale<PointType, PointType> transformationEstimation;\n            transformationEstimation.estimateRigidTransformation(*cloud1, *cloud2, inliers, transformation);\n        }\n        else\n        {\n            pcl::registration::TransformationEstimationSVD<PointType, PointType> transformationEstimation;\n            transformationEstimation.estimateRigidTransformation(*cloud1, *cloud2, inliers, transformation);\n        }\n\n\n        if (transformation == Eigen::Matrix4f::Identity() || IsNotNAN(transformation) == false)\n        {\n            std::cout << \"Identity or NAN\" << std::endl;\n        }\n\n        return transformation;\n    }\n\n    Common<CloudMapper::PointType>::Landmark CloudMapper::FindClosestPoint(CloudMapper::Cloud::Ptr &cloud, CloudMapper::PointType &p)\n    {\n        pcl::search::KdTree<CloudMapper::PointType> search;\n        search.setInputCloud(cloud);\n\n        std::vector<int> indices;\n        std::vector<float> distances;\n        
search.nearestKSearch(p, 1, indices, distances);\n\n        Common<CloudMapper::PointType>::Landmark result;\n        result.index = indices[0];\n        result.point = cloud->at(indices[0]);\n\n        return result;\n    }\n\n    pcl::CorrespondencesPtr CloudMapper::LandmarksToCorresponencies(Cloud::Ptr &cloud1, CloudMapper::Landmarks &landmarks1, Cloud::Ptr &cloud2, CloudMapper::Landmarks &landmarks2)\n    {\n        pcl::CorrespondencesPtr result(new pcl::Correspondences);\n\n        int resultSize = std::min(landmarks1.size(), landmarks2.size());\n\n        for (int i = 0; i < resultSize; i++)\n        {\n            auto sourcePoint1 = landmarks1.at(i).point;\n            auto sourcePoint2 = landmarks2.at(i).point;\n\n            auto p1IsNotNan = IsNotNAN(sourcePoint1);\n            auto p2IsNotNan = IsNotNAN(sourcePoint2);\n            if (p1IsNotNan && p2IsNotNan)\n            {\n                CloudMapper::Landmarks::value_type pt1 = FindClosestPoint(cloud1, sourcePoint1);\n                CloudMapper::Landmarks::value_type pt2 = FindClosestPoint(cloud2, sourcePoint2);\n\n                float distance = CalculateDistance(pt1.point, pt2.point);\n                pcl::Correspondence correspondence(pt1.index, pt2.index, distance);\n\n                result->push_back(correspondence);\n            }\n        }\n\n        return result;\n    }\n\n    bool CloudMapper::IsNotNAN(const Eigen::Matrix4f &x)\n    {\n        return ((x.array() == x.array())).all();\n    }\n\n    bool CloudMapper::IsNotNAN(CloudMapper::PointType &point)\n    {\n        return (boost::math::isnan(point.x) || boost::math::isnan(point.y) || boost::math::isnan(point.z)) == false;\n    }\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Align/CloudMapper.h",
    "content": "#ifndef CLOUDMAPPER_H\n#define CLOUDMAPPER_H\n\n#include <memory>\n\n\n#include <vector>\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n#include <pcl/correspondence.h>\n\n\n//#include \"Provider_Old/DataProviderOld.h\"\n\n#include <Common.h>\n\nnamespace hpe\n{\n    /**\n     \\class\tCloudMapper\n    \n     \\brief\tClass that performs Iterative Closest Point (ICP) alignment.\n    \n     */\n\n    class CloudMapper\n    {\n        public:\n            typedef pcl::PointXYZRGBA PointType;\n            typedef pcl::PointCloud<PointType> Cloud;\n\n            typedef pcl::PointNormal PointNormal;\n            typedef pcl::PointCloud<PointNormal> NormalCloud;\n\n            typedef Common<PointType>::Landmarks Landmarks;\n            typedef std::vector<Landmarks> LandmarksVector;\n            typedef std::vector<Cloud::Ptr> Clouds;\n\n            typedef std::vector<Eigen::Matrix4f> Transformations;\n            typedef std::shared_ptr<std::vector<Eigen::Matrix4f>> TransformationsPtr;\n\n            struct ICPParams\n            {\n                ICPParams(float distance, float threshold, float epsilon, int maxICPIterations, int maxRansacIterations)\n                    : MaxCorrespondenceDistance(distance),\n                      RansacRejectionThreshold(threshold),\n                      TransformationEpsilon(epsilon),\n                      MaxICPIterations(maxICPIterations),\n                      MaxRANSACIterations(maxRansacIterations)\n                {\n\n                }\n\n                float MaxCorrespondenceDistance;\n                float RansacRejectionThreshold;\n                float TransformationEpsilon;\n                int MaxICPIterations;\n                int MaxRANSACIterations;\n            };\n\n            typedef std::vector<ICPParams> DynamicICPParams;\n\n            CloudMapper();\n\n            void AddCloud(Cloud::Ptr &cloud, Landmarks &landmarks);\n            Clouds GetAlignedClouds(TransformationsPtr 
transformations = nullptr);\n            Cloud::Ptr GetMergedCloud();\n            TransformationsPtr GetTransforms();\n\n            void SetICPParams(ICPParams params);\n            void SetDynamicICPParams(DynamicICPParams &params);\n            void SetUseDynamicICP(bool use);\n            void SetUseNormalShooting(bool use);\n            void SetEstimateScale(bool estimateScale);\n            void SetUsePointToPlaneMetric(bool use);\n            void SetWeights(std::vector<double> &weights) {\n                m_weights = weights;\n            }\n\n            Eigen::Matrix4f GetTransformForTwoCloudsPointToPlane(Cloud::Ptr &source,\n                    Cloud::Ptr &target,\n                    ICPParams params = ICPParams(0.05, 0.05, 0.000001, 100000, 100));\n\n\n            Eigen::Matrix4f GetTransformForTwoCloudsPointToPoint(Cloud::Ptr &source,\n                    Cloud::Ptr &target,\n                    ICPParams params = ICPParams(0.05, 0.05, 0.000001, 100000, 100));\n\n            Eigen::Matrix4f GetTransformForTwoCloudsDynamically(Cloud::Ptr &source,\n                    Cloud::Ptr &target,\n                    DynamicICPParams &dynamicParams);\n\n            Eigen::Matrix4f GetTransformHavingLandmarks(Cloud::Ptr &source,\n                    Common<PointType>::Landmarks &l1,\n                    Cloud::Ptr &target,\n                    Common<PointType>::Landmarks &l2);\n\n            Eigen::Matrix4f GetTransformHavingCorrespondences(Cloud::Ptr &source,\n                    Cloud::Ptr &target,\n                    pcl::CorrespondencesPtr &correspondences);\n\n\n            pcl::CorrespondencesPtr LandmarksToCorresponencies(Cloud::Ptr &cloud1, Landmarks &landmarks1,\n                    Cloud::Ptr &cloud2, Landmarks &landmarks2);\n\n            Eigen::Matrix4f ComputeInitialAlignment(Cloud::Ptr &cloud1,\n                                                    Cloud::Ptr &cloud2,\n                                                    pcl::CorrespondencesPtr 
&correspondencies);\n\n\n\n        private:\n            Clouds m_pointClouds;\n            LandmarksVector m_landmarks;\n\n            Clouds AlignAll(Clouds &clouds, LandmarksVector &landmarksVector, TransformationsPtr transformations = nullptr);\n            static Common<PointType>::Landmark FindClosestPoint(Cloud::Ptr &cloud, PointType &p);\n\n            ICPParams m_icpParams;\n            DynamicICPParams m_dynamicICPParams;\n            bool m_useDynamicICP;\n            bool m_usePointToPlaneMetric;\n            bool m_useNormalShooting;\n            bool m_estimateScale;\n\n            std::vector<double> m_weights;\n\n            static bool IsNotNAN(const Eigen::Matrix4f &x);\n            static bool IsNotNAN(CloudMapper::PointType &point);\n    };\n}\n\n#endif // CLOUDMAPPER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Align/IHeadTemplateCreator.h",
    "content": "#ifndef IHEADTEMPLATECREATOR_H\n#define IHEADTEMPLATECREATOR_H\n\n#include <pcl/point_cloud.h>\n\n#include <Interface/IInterface.h>\n\nnamespace hpe\n{\n    template <typename PointType>\n    class IHeadTemplateCreator : public IInterface\n    {\n        public:\n            virtual typename pcl::PointCloud<PointType>::Ptr GetTemplate() = 0;\n    };\n}\n\n\n#endif // IHEADTEMPLATECREATOR_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Align/IncrementalHeadTemplateCreator.cpp",
    "content": "#include \"IncrementalHeadTemplateCreator.h\"\n#include <Filter/Filters.h>\n#include <pcl/common/io.h>\n#include <pcl/common/time.h>\n#include <UI/ShowCloud.h>\n\nnamespace hpe\n{\n\n    IncrementalHeadTemplateCreator::IncrementalHeadTemplateCreator(void)\n        : m_template(new CloudType), m_targetCloud(new CloudType)\n    {\n        m_mapper.SetUseNormalShooting(false);\n        m_mapper.SetUsePointToPlaneMetric(true);\n\n        m_icpParams.push_back(CloudMapper::ICPParams(0.03, 0.005, 10e-6, 400, 10));\n        m_mapper.SetDynamicICPParams(m_icpParams);\n\n        m_aggregatedTransform = Eigen::Matrix4f::Identity();\n    }\n\n\n    IncrementalHeadTemplateCreator::~IncrementalHeadTemplateCreator(void)\n    {\n    }\n\n    IncrementalHeadTemplateCreator::CloudType::Ptr IncrementalHeadTemplateCreator::GetTemplate()\n    {\n        CloudType::Ptr result(new CloudType);\n        pcl::copyPointCloud(*m_template, *result);\n        return result;\n    }\n\n    IncrementalHeadTemplateCreator::CloudType::Ptr IncrementalHeadTemplateCreator::AddCloudToTemplate(CloudType::Ptr cloud)\n    {\n        if (m_template->points.size() == 0)\n        {\n            pcl::copyPointCloud(*cloud, *m_template);\n            pcl::copyPointCloud(*cloud, *m_targetCloud);\n        }\n        else\n        {\n            pcl::ScopeTime t(\"AddCloudToTemplate\");\n            Eigen::Matrix4f transformMatrix = m_mapper.GetTransformForTwoCloudsDynamically(cloud, m_targetCloud, m_icpParams);\n            m_aggregatedTransform *= transformMatrix;\n\n            CloudType::Ptr transformedSource(new CloudType);\n\n            pcl::transformPointCloud(*cloud, *transformedSource, m_aggregatedTransform);\n\n            *m_template += *transformedSource;\n            m_template = Voxelize<PointType>(m_template, 0.001f);\n\n            pcl::copyPointCloud(*cloud, *m_targetCloud);\n        }\n\n        return GetTemplate();\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Align/IncrementalHeadTemplateCreator.h",
    "content": "#pragma once\n\n#ifndef INCREMENTALHEADTEMPLATECREATOR_H\n#define INCREMENTALHEADTEMPLATECREATOR_H\n\n#include <Align/IHeadTemplateCreator.h>\n#include <Align/CloudMapper.h>\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n\nnamespace hpe\n{\n    /**\n     \\class\tIncrementalHeadTemplateCreator\n    \n     \\brief\tClass that performs incremental alighment. \n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class IncrementalHeadTemplateCreator : public IHeadTemplateCreator<pcl::PointXYZRGBA>\n    {\n        public:\n            typedef pcl::PointXYZRGBA PointType;\n            typedef pcl::PointCloud<PointType> CloudType;\n\n            IncrementalHeadTemplateCreator(void);\n            ~IncrementalHeadTemplateCreator(void);\n\n            CloudType::Ptr GetTemplate();\n\n            CloudType::Ptr AddCloudToTemplate(CloudType::Ptr cloud);\n\n        private:\n            CloudType::Ptr m_template;\n            CloudType::Ptr m_targetCloud;\n            CloudMapper m_mapper;\n            CloudMapper::DynamicICPParams m_icpParams;\n            Eigen::Matrix4f m_aggregatedTransform;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Align/TransformationEstimationHPE.cpp",
    "content": "#include \"TransformationEstimationHPE.h\"\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Align/TransformationEstimationHPE.h",
    "content": "#ifndef TRANSFORMATIONESTIMATION_H\n#define TRANSFORMATIONESTIMATION_H\n\n#include <pcl/point_cloud.h>\n\n#include <pcl/registration/transformation_estimation_point_to_plane_weighted.h>\n\nnamespace hpe\n{\n\t/**\n\t \\class\tTransformationEstimationHPE\n\t\n\t \\brief\tClass that performs history-based weighting of the points for ICP\n\t\t\tThanks to this, ICP is speeded up 4 times. The method overloads \n\t\t\tthe standard pcl class to get the desired functionality\n\n\t\n\t \\tparam\tPointSource\tPointType of the source point cloud.\n\t \\tparam\tPointTarget\tPointType of the target point cloud.\n\t \\tparam\tMatScalar  \tscalar type.\n\t */\n\n\ttemplate <typename PointSource, typename PointTarget, typename MatScalar = float>\n    class TransformationEstimationHPE : public pcl::registration::TransformationEstimationPointToPlaneWeighted<PointSource, PointTarget, MatScalar>\n    {\n            typedef TransformationEstimationHPE<PointSource, PointTarget, MatScalar> Self;\n            typedef pcl::registration::TransformationEstimationPointToPlaneWeighted<PointSource, PointTarget, MatScalar> Base;\n            typedef typename Base::Matrix4 Matrix4;\n        public:\n            typedef boost::shared_ptr<TransformationEstimationHPE<PointSource, PointTarget, MatScalar>> Ptr;\n\n            TransformationEstimationHPE()\n            {\n            }\n\n            void SetWeights(std::vector<double> &weights)\n            {\n                m_weights = weights;\n            }\n\n            inline void\n            estimateRigidTransformation(\n                const pcl::PointCloud<PointSource> &cloud_src,\n                const pcl::PointCloud<PointTarget> &cloud_tgt,\n                Matrix4 &transformation_matrix) const\n            {\n                Base::estimateRigidTransformation(cloud_src, cloud_tgt, transformation_matrix);\n            }\n\n            inline void\n            estimateRigidTransformation(\n                const 
pcl::PointCloud<PointSource> &cloud_src,\n                const std::vector<int> &indices_src,\n                const pcl::PointCloud<PointTarget> &cloud_tgt,\n                Matrix4 &transformation_matrix) const\n            {\n                Base::estimateRigidTransformation(cloud_src, indices_src, cloud_tgt, transformation_matrix);\n            }\n\n            void\n            estimateRigidTransformation(\n                const pcl::PointCloud<PointSource> &cloud_src,\n                const std::vector<int> &indices_src,\n                const pcl::PointCloud<PointTarget> &cloud_tgt,\n                const std::vector<int> &indices_tgt,\n                Matrix4 &transformation_matrix) const\n            {\n                std::vector<double> weights(indices_src.size());\n\n                if (m_weights.size() != 0)\n                {\n                    for (int i = 0; i < indices_src.size(); i += 1)\n                    {\n                        weights[i] = m_weights[indices_src[i]];\n                    }\n                }\n                else\n                {\n                    for (int i = 0; i < indices_src.size(); i += 1)\n                    {\n                        weights[i] = 1;\n                    }\n                }\n\n                Self *self = const_cast<Self *>(this);\n                self->Base::setWeights(weights);\n                self->Base::setUseCorrespondenceWeights(false);\n\n                Base::estimateRigidTransformation(cloud_src, indices_src, cloud_tgt, indices_tgt, transformation_matrix);\n            }\n\n            void\n            estimateRigidTransformation(\n                const pcl::PointCloud<PointSource> &cloud_src,\n                const pcl::PointCloud<PointTarget> &cloud_tgt,\n                const pcl::Correspondences &correspondences,\n                Matrix4 &transformation_matrix) const\n            {\n                Base::estimateRigidTransformation(cloud_src, cloud_tgt, correspondences, 
transformation_matrix);\n            }\n\n        private:\n            std::vector<double> m_weights;\n    };\n}\n\n#endif // TRANSFORMATIONESTIMATION_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8)\n\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\n\nproject(HeadPoseEstimationFramework)\n\nFILE (GLOB_RECURSE SRC_LIST *.cpp *.h)\nforeach (SRC ${SRC_LIST})\n\tstring (FIND ${SRC} \"WindowsOnly\" FOUND)\n\tif (NOT ${FOUND} EQUAL \"-1\")\n\t\tlist (REMOVE_ITEM SRC_LIST ${SRC})\n\tendif()\nendforeach ()\n\nfind_package(PCL 1.7 REQUIRED)\nfind_package(OpenCV REQUIRED)\n\ninclude_directories(${PCL_INCLUDE_DIRS} ${CMAKE_CURRENT_LIST_DIR})\nlink_directories(${PCL_LIBRARY_DIRS} ${CMAKE_BINARY_DIR} )\nadd_definitions(${PCL_DEFINITIONS})\n\nadd_library(${PROJECT_NAME} SHARED ${SRC_LIST})\ntarget_link_libraries(${PROJECT_NAME} ${PCL_LIBRARIES} ${OpenCV_LIBS})\n\n\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Common.h",
    "content": "#pragma once\n\n#ifndef COMMON_H\n#define COMMON_H\n\n#include <vector>\n#include <list>\n\nnamespace hpe\n{\n    template <typename PointType>\n    class Common\n    {\n        public:\n            struct Landmark\n            {\n                Landmark() {}\n\n                Landmark(int i, PointType p)\n                    : index(i), point(p)\n                {}\n\n                int index;\n                PointType point;\n            };\n\n            typedef std::vector<Landmark> Landmarks;\n    };\n\n    typedef std::vector<int> PointIndexes;\n    typedef std::list<PointIndexes> PointHistory;\n}\n\n#endif // COMMON_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Converter/IDataConverter.h",
    "content": "#ifndef IDATACONVERTER_H\n#define IDATACONVERTER_H\n\n#include <pcl/point_cloud.h>\n#include <pcl/point_types.h>\n#include <opencv2/opencv.hpp>\n\n#include \"Interface/IInterface.h\"\n\nnamespace hpe\n{\n    class IDataConverter : public IInterface\n    {\n        public:\n            virtual pcl::PointCloud<pcl::PointXYZRGBA>::Ptr DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr) = 0;\n\n    };\n}\n\n#endif // IDATACONVERTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Converter/KinectDataConverter.cpp",
    "content": "#include \"KinectDataConverter.h\"\n\n#include <boost/math/special_functions/fpclassify.hpp>\n#include <boost/math/special_functions/round.hpp>\n\nnamespace hpe\n{\n    typedef union\n    {\n        struct\n        {\n            unsigned char Blue;\n            unsigned char Green;\n            unsigned char Red;\n            unsigned char Alpha;\n        };\n        float float_value;\n        uint32_t long_value;\n    } RGBValue;\n\n    KinectDataConverter::KinectDataConverter()\n    {\n    }\n\n    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr KinectDataConverter::DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr)\n    {\n        cv::Size depthImageSize = depth.size();\n\n        float centerX, centerY;\n        float scaleFactorX, scaleFactorY;\n\n        ComputeCameraIntrinsics(depthImageSize, centerX, centerY, scaleFactorX, scaleFactorY);\n\n        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr result(new pcl::PointCloud<pcl::PointXYZRGBA>);\n        result->width = depthImageSize.width;\n        result->height = depthImageSize.height;\n        result->is_dense = false;\n        result->points.resize(depthImageSize.width * depthImageSize.height);\n\n        for (int y = 0; y < depthImageSize.height; ++y)\n        {\n            for (int x = 0; x < depthImageSize.width; ++x)\n            {\n                pcl::PointXYZRGBA &pt = result->at(x, y);\n\n                float depthValue = (float) depth.at<float>(y, x);\n\n                if (depthValue == -1000.f || depthValue == 0.0f || boost::math::isnan(depthValue))\n                {\n                    pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN();\n                }\n                else\n                {\n                    //depthValue /= 1000;\n                    pt.x = (static_cast<float>(x) - centerX) * scaleFactorX * depthValue;\n                    pt.y = (centerY - static_cast<float>(y)) * scaleFactorY * depthValue;\n                    pt.z = depthValue;\n\n                   
 cv::Vec3b colorValue = bgr.at<cv::Vec3b>(y, x);\n                    RGBValue color;\n                    color.Red = colorValue[2];\n                    color.Green = colorValue[1];\n                    color.Blue = colorValue[0];\n                    pt.rgba = color.long_value;\n                }\n            }\n        }\n\n        return result;\n    }\n\n    void KinectDataConverter::ComputeCameraIntrinsics(cv::Size &imageSize, float &centerX, float &centerY, float &scaleFactorX, float &scaleFactorY)\n    {\n        int dims[] = {imageSize.width, imageSize.height};\n\n        // The 525 factor default is only true for VGA. If not, we should scale\n        scaleFactorX = scaleFactorY = 1 / 525.f * 640.f / dims[0];\n        centerX = ((float)dims[0] - 1.f) / 2.f;\n        centerY = ((float)dims[1] - 1.f) / 2.f;\n    }\n\n    pcl::PointXYZ KinectDataConverter::ConvertOnePoint(float x, float y, float depthValue, cv::Size frameSize)\n    {\n        float centerX, centerY;\n        float scaleFactorX, scaleFactorY;\n\n        ComputeCameraIntrinsics(frameSize, centerX, centerY, scaleFactorX, scaleFactorY);\n\n        pcl::PointXYZ pt;\n        pt.x = (static_cast<float>(x) - centerX) * scaleFactorX * depthValue;\n        pt.y = (centerY - static_cast<float>(y)) * scaleFactorY * depthValue;\n        pt.z = depthValue;\n        return pt;\n    }\n\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Converter/KinectDataConverter.h",
    "content": "#ifndef KINECTDATACONVERTER_H\n#define KINECTDATACONVERTER_H\n\n#include <Converter/IDataConverter.h>\n\nnamespace hpe\n{\n    /**\n     \\class\tKinectDataConverter\n    \n     \\brief\tConverts RGB-D pair to a point cloud. Work only for a kinect sensor.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class KinectDataConverter : public IDataConverter\n    {\n        public:\n            KinectDataConverter();\n            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr) override;\n            pcl::PointXYZ ConvertOnePoint(float x, float y, float depthValue, cv::Size frameSize);\n\n        protected:\n            virtual void ComputeCameraIntrinsics(cv::Size &imageSize, float &centerX, float &centerY, float &scaleFactorX, float &scaleFactorY);\n    };\n}\n\n#endif // KINECTDATACONVERTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataFilter/IDataFilter.h",
    "content": "#ifndef IDATAFILTER_H\n#define IDATAFILTER_H\n\n#include \"Interface/IInterface.h\"\n\nnamespace hpe\n{\n    template <typename DataType>\n    class DataFilter : public IInterface\n    {\n        public:\n            virtual DataType Filter(DataType &input) = 0;\n    };\n}\n\n#endif // IDATAFILTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/CloudXYZRGBA.h",
    "content": "#pragma once\n\n#ifndef CLOUDXYZRGBA_H\n#define CLOUDXYZRGBA_H\n\n#include <DataObject/IDataObject.h>\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n\nnamespace hpe\n{\n    class CloudXYZRGBA : public IDataObject\n    {\n        public:\n            typedef std::shared_ptr<CloudXYZRGBA> Ptr;\n\n            CloudXYZRGBA()\n            {\n            }\n\n            CloudXYZRGBA(bool create)\n            {\n                if (create)\n                {\n                    cloud = pcl::PointCloud<pcl::PointXYZRGBA>::Ptr(new pcl::PointCloud<pcl::PointXYZRGBA>);\n                }\n            }\n\n            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud;\n    };\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/HeadPoseInfo.h",
    "content": "#ifndef HEADPOSEINFO_H\n#define HEADPOSEINFO_H\n\n#include <DataObject/IDataObject.h>\n\nnamespace hpe\n{\n    class HeadPoseInformation : public IDataObject\n    {\n        public:\n            typedef std::shared_ptr<HeadPoseInformation> Ptr;\n\n            HeadPoseInformation()\n            {\n                Yaw = 0;\n                Tilt = 0;\n                Roll = 0;\n                WorldX = 0;\n                WorldY = 0;\n                WorldZ = 0;\n                DepthFrameX = 0;\n                DepthFrameY = 0;\n                HasHeadInfo = false;\n            }\n\n            double Yaw;\n            double Tilt;\n            double Roll;\n            double WorldX;\n            double WorldY;\n            double WorldZ;\n            double DepthFrameX;\n            double DepthFrameY;\n            bool HasHeadInfo;\n    };\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/IDataObject.h",
    "content": "#pragma once\n\n#ifndef IDATAOBJECT_H\n#define IDATAOBJECT_H\n\n#include <Interface/IInterface.h>\n\n#include <memory>\n\nnamespace hpe\n{\n    /**\n     \\class\tIDataObject\n    \n     \\brief\tInterface to store the objects to put into IDataStorage.\n\t\t\tSee subclasses for examples.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class IDataObject : public IInterface\n    {\n        public:\n            typedef std::shared_ptr<IDataObject> Ptr;\n    };\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/IDataStorage.h",
    "content": "#pragma once\n\n#ifndef IDATASTORAGE_H\n#define IDATASTORAGE_H\n\n#include <Interface/IInterface.h>\n#include <DataObject/IDataObject.h>\n#include <Exception/HPEException.h>\n\n#include <string>\n\nnamespace hpe\n{\n    /**\n     \\class\tIDataStorage\n    \n     \\brief\tOne of the key classes to organize between communications processors. \n\t\t\tBasically is a key-value storage. \n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class IDataStorage : public IInterface\n    {\n        public:\n            typedef std::shared_ptr<IDataStorage> Ptr;\n\n            /**\n             \\fn\tvirtual void IDataStorage::Set(std::string key, IDataObject::Ptr object) = 0;\n            \n             \\brief\tAdd object into storage with key\n            \n             \\author\tSergey\n             \\date\t8/11/2015\n            \n             \\param\tkey   \tKey\n             \\param\tobject\tObject to add. The object should be contained in IDataObject\n             */\n\n            virtual void Set(std::string key, IDataObject::Ptr object) = 0;\n            virtual IDataObject::Ptr Get(std::string key) = 0;\n            virtual void Remove(std::string key) = 0;\n\n            template<class T>\n            std::shared_ptr<T> GetAndCast(std::string key)\n            {\n                IDataObject::Ptr obj = Get(key);\n                return std::dynamic_pointer_cast<T>(obj);\n            }\n\n            template<class T>\n            std::shared_ptr<T> GetAndCastNotNull(std::string key, std::string message = \"Got nullptr from storage\")\n            {\n                std::shared_ptr<T> ptr = GetAndCast<T>(key);\n                if (ptr.get() == nullptr)\n                {\n                    throw HPEException(message);\n                }\n                return ptr;\n            }\n\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/LandmarksObject.h",
    "content": "#pragma once\n\n#ifndef LANDMARKSOBJECT_H\n#define LANDMARKSOBJECT_H\n\n#include <DataObject/IDataObject.h>\n\n#include <Landmarks.h>\n\nnamespace hpe\n{\n    template <class PointType>\n    class LandmarksObject : public IDataObject\n    {\n        public:\n            typedef std::shared_ptr<LandmarksObject> Ptr;\n\n            typename Common<PointType>::Landmarks landmarks;\n    };\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/MapDataStorage.cpp",
    "content": "#include \"MapDataStorage.h\"\n\nnamespace hpe\n{\n    void MapDataStorage::Set(std::string key, IDataObject::Ptr object)\n    {\n        m_storage[key] = object;\n    }\n\n    IDataObject::Ptr MapDataStorage::Get(std::string key)\n    {\n        return m_storage[key];\n    }\n\n    void MapDataStorage::Remove(std::string key)\n    {\n        auto it = m_storage.find(key);\n        if (it != m_storage.end())\n        {\n            m_storage.erase(it);\n        }\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/MapDataStorage.h",
    "content": "#pragma once\n\n#ifndef MAPDATASTORAGE_H\n#define MAPDATASTORAGE_H\n\n#include <DataObject/IDataStorage.h>\n#include <map>\n\nnamespace hpe\n{\n\n    class MapDataStorage : public IDataStorage\n    {\n        public:\n            void Set(std::string key, IDataObject::Ptr object);\n            IDataObject::Ptr Get(std::string key);\n            void Remove(std::string key);\n        private:\n            std::map<std::string, IDataObject::Ptr> m_storage;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/DataObject/RawFrames.h",
    "content": "#pragma once\n\n#ifndef RAWFRAMES_H\n#define RAWFRAMES_H\n\n#include <DataObject/IDataObject.h>\n\n#include <opencv2/opencv.hpp>\n\nnamespace hpe\n{\n    class RawFrames : public IDataObject\n    {\n        public:\n            typedef std::shared_ptr<RawFrames> Ptr;\n\n            cv::Mat colorFrame;\n            cv::Mat depthFrame;\n    };\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/DepthDetector.cpp",
    "content": "#include \"DepthDetector.h\"\n\n#include <boost/math/special_functions/fpclassify.hpp>\n\nnamespace hpe\n{\n\n    DepthDetector::DepthDetector(std::string dataFolder)\n    {\n        Init(dataFolder);\n    }\n\n    DepthDetector::~DepthDetector(void)\n    {\n    }\n\n    hpe::headPoseInfo DepthDetector::Detect(cv::Mat depthFrame)\n    {\n        cv::Mat converted(depthFrame.rows, depthFrame.cols, depthFrame.type());\n\n        for (int y = 0; y < depthFrame.rows; ++y)\n        {\n            for (int x = 0; x < depthFrame.cols; ++x)\n            {\n                float depthValue = depthFrame.at<float>(y, x);\n\n                if (depthValue > 1.5 || depthValue == 0.0f || boost::math::isnan(depthValue))\n                {\n                    depthValue = 0;\n                }\n                else\n                {\n                    depthValue = depthValue * 1000;\n                }\n\n                converted.at<float>(y, x) = depthValue;\n            }\n        }\n\n        headPoseInfo info = estimateHeadAngles(converted, m_cascade, m_paramList);\n        return info;\n    }\n\n    void DepthDetector::Init(std::string dataFolder)\n    {\n        std::string configFileName = dataFolder + \"/config.txt\";\n        loadConfig(configFileName, &m_paramList);\n\n        std::string modelsDir = dataFolder + \"/models/\";\n        for (int i = 0; i < 5; i++)\n        {\n            m_cascade.push_back(readTree(modelsDir, i + 1));\n        }\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/DepthDetector.h",
    "content": "#pragma once\n\n#ifndef DEPTHDETECTOR_H\n#define DEPTHDETECTOR_H\n\n#include <opencv2/opencv.hpp>\n\n#include \"localFunctions.h\"\n#include \"Tree.h\"\n\nnamespace hpe\n{\n    class DepthDetector\n    {\n        public:\n            DepthDetector(std::string dataFolder);\n            ~DepthDetector(void);\n\n            headPoseInfo Detect(cv::Mat depthFrame);\n        private:\n            void Init(std::string dataFolder);\n\n            paramList m_paramList;\n            std::vector<Tree> m_cascade;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/LeafNode.cpp",
    "content": "#include <cmath>\n#include <algorithm>\n#include <vector>\n#include <iostream>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\n#include \"LeafNode.h\"\n\nusing namespace cv;\nusing namespace std;\nusing std::vector;\n\nnamespace hpe\n{\n\n\n    void evaluatePackedTree(Mat patchInt, Mat maskInt, Mat treeMat, Mat treeIdxMap, int initDepth, const cv::Rect roi, LeafNode *leaf)\n    {\n\n        int leafDim = treeMat.cols;//(int*) mxGetDimensions(input[2]);\n        leaf->mean.create(1, 6, CV_64FC1);\n        int node = initDepth;\n        double pnode = (int)treeMat.at<double>(node, 1);\n        int incr = 0;\n\n        int xa1, ya1, xa2, ya2, xb1, yb1, xb2, yb2;\n\n        while (pnode == -1) {\n            xa1 = roi.x + (int)treeMat.at<double>(node, 2);\n            ya1 = roi.y + (int)treeMat.at<double>(node, 3);\n            xa2 = xa1 + (int)treeMat.at<double>(node, 6);\n            ya2 = ya1 + (int)treeMat.at<double>(node, 7);\n            xb1 = roi.x + (int)treeMat.at<double>(node, 4);\n            yb1 = roi.y + (int)treeMat.at<double>(node, 5);\n            xb2 = xb1 + (int)treeMat.at<double>(node, 8);\n            yb2 = yb1 + (int)treeMat.at<double>(node, 9);\n\n            double numrA = patchInt.at<double>(ya2, xa2) - patchInt.at<double>(ya2, xa1) - patchInt.at<double>(ya1, xa2) + patchInt.at<double>(ya1, xa1);\n            double numA = maskInt.at<double>(ya2, xa2) - maskInt.at<double>(ya2, xa1) - maskInt.at<double>(ya1, xa2) + maskInt.at<double>(ya1, xa1);\n\n            double numrB = patchInt.at<double>(yb2, xb2) - patchInt.at<double>(yb2, xb1) - patchInt.at<double>(yb1, xb2) + patchInt.at<double>(yb1, xb1);\n            double numB = maskInt.at<double>(yb2, xb2) - maskInt.at<double>(yb2, xb1) - maskInt.at<double>(yb1, xb2) + maskInt.at<double>(yb1, xb1);\n\n            double mxA = (numA > 1) ? numA : 1;\n            double mxB = (numB > 1) ? 
numB : 1;\n\n            double testVal = numrA / mxA - numrB / mxB;\n            int test = (testVal >= treeMat.at<double>(node, 11)) ? 1 : 0;\n\n            node = (int)treeMat.at<double>(node, 0) - 1;\n            incr = node + test + 1;\n            node = (int)(treeIdxMap.at<double>(node + incr, 0) - 1);\n            pnode = (int)treeMat.at<double>(node, 1);\n        }\n\n        /*for(int i=0; i< leafDim; i++)\n        {\n            *out = treeMat.at<double>(node, i);\n            out++;\n        }*/\n\n        leaf->pfg = treeMat.at<double>(node, 2);\n        leaf->trace = treeMat.at<double>(node, 3);\n        //cout << treeMat.at<double>(node, 4) << endl;\n        leaf->mean = (Mat_<double>(1, 6) << treeMat.at<double>(node, 4),\n                      treeMat.at<double>(node, 5),\n                      treeMat.at<double>(node, 6),\n                      treeMat.at<double>(node, 7),\n                      treeMat.at<double>(node, 8),\n                      treeMat.at<double>(node, 9));\n        /*leaf->mean.at<double>(0) = treeMat.at<double>(node, 4);\n        leaf->mean.at<double>(1) = treeMat.at<double>(node, 5);\n        leaf->mean.at<double>(2) = treeMat.at<double>(node, 6);\n        leaf->mean.at<double>(3) = treeMat.at<double>(node, 7);\n        leaf->mean.at<double>(4) = treeMat.at<double>(node, 8);\n        leaf->mean.at<double>(5) = treeMat.at<double>(node, 9);*/\n\n        //return leafPtr;\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/LeafNode.h",
    "content": "#ifndef LEAFNODE_H\n#define LEAFNODE_H\n\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\nusing namespace cv;\nusing namespace std;\n\nnamespace hpe\n{\n\n    class LeafNode {\n\n        public:\n\n            LeafNode() { }\n            ~LeafNode() {\n                mean.release();\n            }\n\n            // Probability of belonging to head region\n            double pfg;\n            // mean vector\n            cv::Mat mean;\n            // trace of the covariance matrix\n            double trace;\n\n    };\n\n//LeafNode* evaluatePackedTree(Mat patchInt, Mat maskInt, Mat treeMat, Mat treeIdxMap, int initDepth, const cv::Rect roi);\n    void evaluatePackedTree(Mat patchInt, Mat maskInt, Mat treeMat, Mat treeIdxMap, int initDepth, const cv::Rect roi, LeafNode *leaf);\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/Tree.cpp",
    "content": "#include <cmath>\n#include <algorithm>\n#include <vector>\n#include <iostream>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\n#include \"Tree.h\"\n#include \"csvMat.h\"\n\nusing namespace cv;\nusing namespace std;\nusing std::vector;\n\n#include <fstream>\n\nnamespace hpe\n{\n\n    Tree readTree(string csvRootName, int treeIdx)\n    {\n        cout << \"Reading tree no. \" << treeIdx << \" from: \" << csvRootName << endl;\n        string treeIdxStr;\n        ostringstream convert;\n        convert << treeIdx;\n        treeIdxStr = convert.str();\n        string specsFileName = csvRootName + \"tree00\" + treeIdxStr + \"_specs.csv\";\n        Mat specsMat;\n        specsMat.create(1, 6, CV_64FC1);\n        specsMat = readCSVMat(specsFileName, 1, 6);\n\n        Tree t;\n        t.treeMat.create(int(specsMat.at<double>(0)), int(specsMat.at<double>(1)), CV_64FC1);\n        t.idxMap.create(int(specsMat.at<double>(2)), 1, CV_64FC1);\n\n        string matFileName = csvRootName + \"tree00\" + treeIdxStr + \"_Mat.csv\";\n        string idxMapFileName = csvRootName + \"tree00\" + treeIdxStr + \"_IdxMap.csv\";\n\n        t.treeMat = readCSVMat(matFileName, int(specsMat.at<double>(0)), int(specsMat.at<double>(1)));\n        t.idxMap = readCSVMat(idxMapFileName, int(specsMat.at<double>(2)), 1);\n\n        return t;\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/Tree.h",
    "content": "#pragma once\n#ifndef TREE_H\n#define TREE_H\n\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\nusing namespace cv;\nusing namespace std;\n\nnamespace hpe\n{\n\n    class Tree {\n\n        public:\n\n            Tree() { }\n            ~Tree() { }\n\n            Mat treeMat, idxMap;\n    };\n\n    Tree readTree(string csvRootName, int treeIdx);\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/csvMat.cpp",
    "content": "#include <cmath>\n#include <algorithm>\n#include <vector>\n#include <iostream>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\n#include \"csvMat.h\"\n\nusing namespace cv;\nusing namespace std;\nusing std::vector;\n\n#include <fstream>\n\nnamespace hpe\n{\n\n    Mat readCSVMat(string csvFrameName, int lin, int col)\n    {\n        Mat output(lin, col, CV_64FC1);\n        ifstream inFile(csvFrameName);\n        string line;\n        int linenum = 0;\n        while (getline(inFile, line))\n        {\n            linenum++;\n            //cout << \"\\nLine #\" << linenum << \":\" << endl;\n            istringstream linestream(line);\n            string item;\n            int itemnum = 0;\n            while (getline(linestream, item, ','))\n            {\n                itemnum++;\n                //cout << \"Item #\" << itemnum << \": \" << item << endl;\n                double temp = (double)atof(item.c_str());\n                output.at<double>(linenum - 1, itemnum - 1) = (double)temp;\n            }\n        }\n        return output;\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/csvMat.h",
    "content": "#pragma once\n#ifndef CSVMAT_H\n#define CSVMAT_H\n\n#include <string>\n#include <algorithm>\n#include <iostream>\n#include <vector>\n#include <stdint.h>\n#include \"opencv2/core/core.hpp\"\n\nusing namespace std;\nusing namespace cv;\n\nnamespace hpe\n{\n\n    Mat readCSVMat(string csvFrameName, int lin, int col);\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/localFunctions.cpp",
    "content": "#include <string>\n#include <algorithm>\n#include <iostream>\n#include <vector>\n#include <stdint.h>\n#include <opencv2/opencv.hpp>\n#include \"localFunctions.h\"\n#include <fstream>\n#include <opencv2/imgproc/imgproc.hpp>\n#include \"Tree.h\"\n#include \"LeafNode.h\"\n\nusing namespace std;\nusing namespace cv;\n\nnamespace hpe\n{\n\n    void loadConfig(std::string filename, paramList *p, bool verbose)\n    {\n        ifstream in(filename);\n        string dummy;\n\n        if (in.is_open()) {\n\n            in >> dummy;\n            in >> p->maskThreshold;\n\n            in >> dummy;\n            in >> p->stride;\n\n            in >> dummy;\n            in >> p->patchSize;\n\n            in >> dummy;\n            in >> p->headPatchProb;\n\n            in >> dummy;\n            in >> p->headDiameter;\n\n            in >> dummy;\n            in >> p->headPatchDistTh;\n\n            in >> dummy;\n            in >> p->minVotes;\n\n            in >> dummy;\n            in >> p->yawTrace;\n\n            in >> dummy;\n            in >> p->tiltTrace;\n\n            in >> dummy;\n            in >> p->ytDiameter;\n\n        }\n        else {\n            cerr << \"File not found \" << filename << endl;\n        }\n        in.close();\n\n        if (verbose)\n        {\n            cout << endl << \"------------------------------------\" << endl;\n            cout << \"Parameter List:       \" << endl << endl;\n            cout << \"maskThreshold:            \" << p->maskThreshold << endl;\n            cout << \"stride:\t\t\t  \" << p->stride << endl;\n            cout << \"patchSize:                \" << p->patchSize << endl;\n            cout << \"headPatchProb:            \" << p->headPatchProb << endl;\n            cout << \"headDiameter:             \" << p->headDiameter << endl;\n            cout << \"headPatchDistTh:          \" << p->headPatchDistTh << endl;\n            cout << \"minNoVotes:               \" << p->minVotes << endl;\n            cout 
<< \"yawTrace:                 \" << p->yawTrace << endl;\n            cout << \"tiltTrace:                \" << p->tiltTrace << endl;\n            cout << \"yawTiltDiameter:          \" << p->ytDiameter << endl;\n\n            cout << endl << \"------------------------------------\" << endl << endl;\n        }\n    }\n\n    Rect getBoundingBox(const Mat &im3D, paramList p) {\n\n        Rect bbox;\n        int min_x = im3D.cols;\n        int min_y = im3D.rows;\n        int max_x = 0;\n        int max_y = 0;\n        int p_width = p.patchSize;\n        int p_height = p.patchSize;\n\n        for (int y = 0; y < im3D.rows; y++)\n        {\n            const double *Mi = im3D.ptr<double>(y);\n            for (int x = 0; x < im3D.cols; x++) {\n\n                if (Mi[x] > 0) {\n\n                    min_x = MIN(min_x, x);\n                    min_y = MIN(min_y, y);\n                    max_x = MAX(max_x, x);\n                    max_y = MAX(max_y, y);\n                }\n\n            }\n        }\n\n        int new_w = max_x - min_x;\n        int new_h = max_y - min_y;\n\n        bbox.x = MIN(im3D.cols - 1, MAX(0 , min_x));\n        bbox.y = MIN(im3D.rows - 1, MAX(0 , min_y));\n\n        //int new_w = max_x - min_x + p_width;\n        //int new_h = max_y - min_y + p_height;\n\n        //bbox.x = MIN( im3D.cols-1, MAX( 0 , min_x - p_width/2 ) );\n        //bbox.y = MIN( im3D.rows-1, MAX( 0 , min_y - p_height/2) );\n\n        bbox.width  = MAX(0, MIN(new_w, im3D.cols - bbox.x));\n        bbox.height = MAX(0, MIN(new_h, im3D.rows - bbox.y));\n\n        return bbox;\n    }\n\n    double computeNorm(vector<double> vec1, vector<double> vec2)\n    {\n        double result = 0;\n        int dim = vec1.size();\n        vector<double> diff(dim);\n        double sumSq = 0;\n        for (int i = 0; i < dim; i++)\n        {\n            diff[i] = vec1[i] - vec2[i];\n            sumSq += pow(diff[i], 2);\n        }\n        result = sqrt(sumSq);\n        return result;\n    }\n\n   
 double computeMean(vector<double> vec)\n    {\n        double sum = 0;\n        for (int k = 0; k < (int)vec.size(); k++)\n        {\n            sum += vec[k];\n        }\n        return sum / vec.size();\n    }\n\n    Mat meanShift(Mat dataPts, double bandWidth, int *maxClustPointer)\n    {\n        *maxClustPointer = -1;\n        int biggestClusterSize = 0;\n        int numDim = dataPts.cols;\n        int numPts = dataPts.rows;\n        int numClust = -1;\n        double bandSq = bandWidth * bandWidth;\n        std::vector<int> initPtInds(numPts);\n        for (int k = 0; k < (int)initPtInds.size(); k++)\n            initPtInds[k] = k;\n\n        double stopThresh = 1e-3 * bandWidth;\n\n        Mat clustCent(0, numDim, CV_64FC1);\n\n        vector<bool> beenVisitedFlag(numPts);\n        int numInitPts = numPts;\n\n        while (numInitPts > 0)\n        {\n            // pick a random, not yet visited point as the initial mean\n            int tempInd = (int)std::ceil((float)(numInitPts - 1) * std::rand() / RAND_MAX);\n            int stInd = initPtInds[tempInd];\n\n            std::vector<double> myMean(numDim);\n            for (int i = 0; i < numDim; i++)\n            {\n                myMean[i] = dataPts.at<double>(stInd, i);\n            }\n\n            vector<int> thisClusterVotes(numPts);\n\n            while (1)\n            {\n                vector<double> sqDistToAll(numPts);\n                for (int i = 0; i < numPts; i++)\n                {\n                    double result = 0;\n                    for (int j = 0; j < numDim; j++)\n                    {\n                        double diff = dataPts.at<double>(i, j) - myMean[j];\n                        result += pow(diff, 2);\n                    }\n                    sqDistToAll[i] = result;\n                }\n                vector<bool> inInds(numPts);\n                for (int i = 0; i < numPts; i++)\n                {\n                    inInds[i] = sqDistToAll[i] < bandSq;\n                    
if (inInds[i] == true)\n                    {\n                        thisClusterVotes[i]++;\n                    }\n                }\n\n                int countTrues = count_if(inInds.begin(), inInds.end(), [](bool b) {\n                    return b == true;\n                });\n                vector<double> myOldMean = myMean;\n                cv::Mat tempData(countTrues, numDim, CV_64FC1);\n                int localIdx = 0;\n                for (int i = 0; i < inInds.size(); i++)\n                {\n                    if (inInds[i] == true)\n                    {\n                        for (int j = 0; j < numDim; j++)\n                        {\n                            tempData.at<double>(localIdx, j) = (double)dataPts.at<double>(i, j);\n                        }\n                        localIdx++;\n                        beenVisitedFlag[i] = true;\n                    }\n                }\n                Mat myMeanMat;\n\t\t\t\tcv::reduce(tempData, myMeanMat, 0, CV_REDUCE_AVG);\n                myMeanMat.copyTo(myMean);\n\n                // compute the 2-norm of the difference-between-means vector\n                double meanNorm = computeNorm(myOldMean, myMean);\n\n                if (meanNorm < stopThresh)\n                {\n                    int mergeWith = -1;\n                    for (int cn = 0; cn <= numClust; cn++)\n                    {\n                        vector<double> localClustCent(numDim);\n                        clustCent.row(cn).copyTo(localClustCent);\n                        double distToOther = computeNorm(myMean, localClustCent);\n\n                        if (distToOther < bandWidth / 2)\n                        {\n                            mergeWith = cn;\n                            break;\n                        }\n                    }\n\n                    if (mergeWith >= 0)\n                    {\n                        Mat localMat = 0.5 * (myMeanMat + clustCent.row(mergeWith));\n                        
localMat.copyTo(clustCent.row(mergeWith));\n                    }\n                    else\n                    {\n                        numClust++;\n                        clustCent.push_back(myMeanMat);\n                        if (tempData.rows > biggestClusterSize)\n                        {\n                            biggestClusterSize = tempData.rows;\n                            *maxClustPointer = numClust; // remember the index of the biggest cluster so far\n                        }\n                    }\n                    break;\n                }\n            }\n\n            initPtInds.clear();\n            for (int i = 0; i < (int)beenVisitedFlag.size(); i++)\n            {\n                if (beenVisitedFlag[i] == false)\n                {\n                    initPtInds.push_back(i);\n                }\n            }\n            numInitPts = (int)initPtInds.size();\n        }\n        return clustCent;\n    }\n\n    headPoseInfo estimateHeadAngles(Mat depthFrame, vector<Tree> cascade, paramList p)\n    {\n        headPoseInfo eAngles;\n        eAngles.x = 0;\n        eAngles.y = 0;\n        eAngles.tilt = -100;\n        eAngles.yaw = -100;\n\n        int p_width = p.patchSize;\n        int p_height = p.patchSize;\n        Rect bbox = getBoundingBox(depthFrame, p);\n\n        //integral image of the depth frame\n        int rows = depthFrame.rows;\n        int cols = depthFrame.cols;\n        Mat depthInt(rows + 1, cols + 1, CV_64FC1);\n        integral(depthFrame, depthInt);\n\n        //mask\n        depthFrame.convertTo(depthFrame, CV_32FC1);\n        Mat mask(rows, cols, CV_32FC1);\n        mask.setTo(0);\n        cv::threshold(depthFrame, mask, p.maskThreshold, 1, THRESH_BINARY);\n\n        //integral image of the mask\n        Mat maskInt(rows + 1, cols + 1, CV_64FC1);\n        maskInt.setTo(0);\n        integral(mask, maskInt);\n\n        //defines the test patch\n        Rect roi = Rect(0, 0, p_width, 
p_height);\n\n        LeafNode bufferLeaf;\n\n        //process each patch\n        int startX = std::max(2, bbox.x + 1 - p_width / 2 + 1);\n        int endX = std::min(bbox.x + bbox.width - p_width / 2 + 1, maskInt.cols - p_width);\n        int startY = std::max(2, bbox.y + 1 - p_height / 2 + 1);\n        int endY = std::min(bbox.y + bbox.height - p_height / 2 + 1, maskInt.rows - p_height);\n        vector<double> headPatchIdxX, headPatchIdxY;\n\n        for (roi.y = startY; roi.y <= endY; roi.y += p.stride)\n        {\n            for (roi.x = startX; roi.x <= endX; roi.x += p.stride)\n            {\n                int x0 = roi.x;\n                int y0 = roi.y;\n                int x1 = x0 + p_width - 1;\n                int y1 = y0 + p_height - 1;\n\n                //discard if the patch is filled with data for less than 10%\n                //integral-image rectangle sum of the mask over the patch: all four corners must be consistent\n                double areaSum = maskInt.at<double>(y1, x1) - maskInt.at<double>(y1, x0 - 1) - maskInt.at<double>(y0 - 1, x1) + maskInt.at<double>(y0 - 1, x0 - 1);\n                if (areaSum / (p_width * p_height) <= 0.1)\n                    continue;\n\n                evaluatePackedTree(depthInt, maskInt, cascade[0].treeMat, cascade[0].idxMap, 0, roi, &bufferLeaf);\n\n                // store only patches that are almost certain to belong to the head\n                if (bufferLeaf.pfg > p.headPatchProb)\n                {\n                    headPatchIdxX.push_back((double)x0);\n                    headPatchIdxY.push_back((double)y0);\n                }\n            }\n        }\n\n        if (headPatchIdxX.size() > 0)\n        {\n            Mat dataPtsX(headPatchIdxX);\n            Mat dataPtsY(headPatchIdxY);\n            Mat dataPts;\n            hconcat(dataPtsX, dataPtsY, dataPts);\n\n            int maxClustIdx = -1; // initialize the index of the biggest cluster (in case there are more than 1, it is useful to know which is the biggest)\n  
          Mat clustCent = meanShift(dataPts, p.headDiameter, &maxClustIdx);\n            vector<double> clustCentVec(dataPts.cols);\n            clustCent.row(maxClustIdx).copyTo(clustCentVec);\n\n            // once the center of the patches that \"think\" they belong to the head has been found\n            // it's time to call the X and Y offset trees that will vote for the head center\n            // and the Yaw/Tilt trees for the yaw/tilt predictions\n\n            Mat votes(0, 8, CV_64FC1); // offsetX, offsetY, offsetZ(not used), yaw, tilt, roll(not used), traceYaw, traceTilt\n            Mat dummyVote(1, 8, CV_64FC1);\n            Mat headPatchDataMat(0, 2, CV_64FC1);\n            Mat headPatchCenterAux(1, 2, CV_64FC1);\n\n            for (roi.y = startY; roi.y <= endY; roi.y += p.stride)\n            {\n                for (roi.x = startX; roi.x <= endX; roi.x += p.stride)\n                {\n                    vector<double> currPoint(clustCentVec.size());\n                    currPoint[0] = roi.x;\n                    currPoint[1] = roi.y;\n\n                    int x0 = roi.x;\n                    int y0 = roi.y;\n                    int x1 = x0 + p_width - 1;\n                    int y1 = y0 + p_height - 1;\n\n                    //integral-image rectangle sum of the mask over the patch: all four corners must be consistent\n                    double areaSum = maskInt.at<double>(y1, x1) - maskInt.at<double>(y1, x0 - 1) - maskInt.at<double>(y0 - 1, x1) + maskInt.at<double>(y0 - 1, x0 - 1);\n                    double localNorm = computeNorm(clustCentVec, currPoint);\n\n                    if ((areaSum / (p_width * p_height) > 0.1) && (localNorm <= p.headPatchDistTh))\n                    {\n                        headPatchCenterAux.at<double>(0, 0) = (double)(x0 + p.patchSize / 2);\n                        headPatchCenterAux.at<double>(0, 1) = (double)(y0 + p.patchSize / 2);\n                        headPatchDataMat.push_back(headPatchCenterAux);\n                        // offset on X\n                        evaluatePackedTree(depthInt, maskInt, cascade[1].treeMat, 
cascade[1].idxMap, 0, roi, &bufferLeaf);\n                        dummyVote.at<double>(0) = x0 + p.patchSize / 2 - bufferLeaf.mean.at<double>(0);\n                        // offset on Y\n                        evaluatePackedTree(depthInt, maskInt, cascade[2].treeMat, cascade[2].idxMap, 0, roi, &bufferLeaf);\n                        dummyVote.at<double>(1) = y0 + p.patchSize / 2 - bufferLeaf.mean.at<double>(1);\n                        dummyVote.at<double>(2) = 0;\n                        // Yaw\n                        evaluatePackedTree(depthInt, maskInt, cascade[3].treeMat, cascade[3].idxMap, 0, roi, &bufferLeaf);\n                        dummyVote.at<double>(3) = bufferLeaf.mean.at<double>(3);\n                        dummyVote.at<double>(5) = 0;\n                        dummyVote.at<double>(6) = bufferLeaf.trace;\n                        // Tilt\n                        evaluatePackedTree(depthInt, maskInt, cascade[4].treeMat, cascade[4].idxMap, 0, roi, &bufferLeaf);\n                        dummyVote.at<double>(4) = bufferLeaf.mean.at<double>(4);\n                        dummyVote.at<double>(7) = bufferLeaf.trace;\n                        votes.push_back(dummyVote);\n                    }\n                }\n            }\n\n            vector<double> mnHC(2);\n            mnHC[0] = 0;\n            mnHC[1] = 0;\n\n            Mat votesFin(8, votes.rows, CV_64FC1);\n            transpose(votes, votesFin);\n\n            vector<double> offsetXVec(votesFin.cols), offsetYVec(votesFin.cols), predYawVec(votesFin.cols);\n            vector<double> predTiltVec(votesFin.cols), traceYawVec(votesFin.cols), traceTiltVec(votesFin.cols);\n            votesFin.row(0).copyTo(offsetXVec);\n            votesFin.row(1).copyTo(offsetYVec);\n            votesFin.row(3).copyTo(predYawVec);\n            votesFin.row(4).copyTo(predTiltVec);\n            votesFin.row(6).copyTo(traceYawVec);\n            votesFin.row(7).copyTo(traceTiltVec);\n\n            if (offsetXVec.size() > 0)\n 
           {\n                mnHC[0] = computeMean(offsetXVec);\n                mnHC[1] = computeMean(offsetYVec);\n                eAngles.x = mnHC[0];\n                eAngles.y = mnHC[1];\n\n                vector<double> distHeadCenter(headPatchDataMat.rows);\n                vector<double> distHeadCenterIdx(headPatchDataMat.rows);\n                vector<double> headPatchDataLocal(2);\n                for (int k = 0; k < headPatchDataMat.rows; k++)\n                {\n                    headPatchDataMat.row(k).copyTo(headPatchDataLocal);\n                    distHeadCenter[k] = computeNorm(headPatchDataLocal, mnHC);\n                    distHeadCenterIdx[k] = k;\n                }\n                std::sort(distHeadCenterIdx.begin(), distHeadCenterIdx.end(), [&distHeadCenter](size_t i1, size_t i2) {\n                    return distHeadCenter[i1] < distHeadCenter[i2];\n                });\n                vector<int> headCenterInd(offsetXVec.size());\n                headCenterInd.assign(offsetXVec.size(), 0);\n                if (offsetXVec.size() > p.minVotes)\n                {\n                    for (int k = 0; k < p.minVotes; k++)\n                    {\n                        headCenterInd[(int)distHeadCenterIdx[k]] = 1;\n                    }\n                }\n                else\n                {\n                    headCenterInd.assign(offsetXVec.size(), 1);\n                }\n\n                vector<int> traceInd(offsetXVec.size());\n                traceInd.assign(offsetXVec.size(), 0);\n\n                for (int k = 0; k < offsetXVec.size(); k++)\n                {\n                    if ((traceYawVec[k] + traceTiltVec[k]) / 2 < p.yawTrace)\n                    {\n                        traceInd[k] = 1;\n                    }\n                }\n\n                vector<int> allInds(traceInd.size());\n                Mat yawTiltPoints(0, 2, CV_64FC1);\n                Mat yawTiltDummy(1, 2, CV_64FC1);\n\n                for (int k = 
0; k < traceInd.size(); k++)\n                {\n                    allInds[k] = traceInd[k] * headCenterInd[k];\n                    if (allInds[k] == 1)\n                    {\n                        yawTiltDummy.at<double>(0) = predYawVec[k];\n                        yawTiltDummy.at<double>(1) = predTiltVec[k];\n                        yawTiltPoints.push_back(yawTiltDummy);\n                    }\n                }\n                //std::transform(headCenterInd.begin() + 1, headCenterInd.end(), traceInd.begin()+ 1, std::multiplies<int>());\n\n                int maxClustIdx = -1;\n                Mat clustCent = meanShift(yawTiltPoints, p.ytDiameter, &maxClustIdx);\n                if (maxClustIdx != -1 && maxClustIdx < clustCent.rows)\n                {\n                    eAngles.yaw = clustCent.at<double>(maxClustIdx, 0);\n                    eAngles.tilt = clustCent.at<double>(maxClustIdx, 1);\n                }\n\n                //clustCent.row(maxClustIdx).copyTo(eAngles);\n            }\n        }\n\n        return eAngles;\n    }\n\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/localFunctions.h",
    "content": "#ifndef LOCALFUNCTIONS_H\n#define LOCALFUNCTIONS_H\n\n#include <string>\n#include <algorithm>\n#include <iostream>\n#include <vector>\n#include <stdint.h>\n#include \"opencv2/core/core.hpp\"\n#include \"Tree.h\"\n\nusing namespace std;\nusing namespace cv;\n\nnamespace hpe\n{\n\n    struct paramList\n    {\n        double maskThreshold;\n        int stride;\n        int patchSize;\n        double headPatchProb;\n        double headDiameter;\n        double headPatchDistTh;\n        double minVotes;\n        double yawTrace;\n        double tiltTrace;\n        double ytDiameter;\n    };\n\n    struct headPoseInfo\n    {\n        double x, y, yaw, tilt;\n    };\n\n    void loadConfig(std::string filename, paramList *p, bool verbose = false);\n    Rect getBoundingBox(const Mat &im3D, paramList p);\n    double computeNorm(vector<double> vec1, vector<double> vec2);\n    double computeMean(vector<double> vec);\n    headPoseInfo estimateHeadAngles(Mat depthFrame, vector<Tree> cascade, paramList p);\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Exception/HPEException.cpp",
    "content": "#include \"HPEException.h\"\n\nHPEException::HPEException(std::string message)\n    : m_message(message)\n{\n\n}\n\nconst char *HPEException::what()\n{\n    return m_message.c_str();\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Exception/HPEException.h",
    "content": "#ifndef HPEEXCEPTION_H\n#define HPEEXCEPTION_H\n\n#include <stdexcept>\n\nclass HPEException : public std::exception\n{\n    public:\n        HPEException(std::string message);\n        virtual const char *what();\n\n    private:\n        std::string m_message;\n};\n\n#endif // HPEEXCEPTION_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/FacialExpressions/ferLocalFunctions.cpp",
    "content": "#include <string>\n#include <algorithm>\n#include <iostream>\n#include <vector>\n#include <stdint.h>\n#include \"opencv2/core/core.hpp\"\n#include \"ferLocalFunctions.h\"\n#include <fstream>\n#include <opencv2/imgproc/imgproc.hpp>\n#include <opencv2/opencv.hpp>\n#include <fstream>\n#include <Windows.h>\n#include <map>\n#include <boost/filesystem.hpp>\n\nnamespace fer\n{\n\n    using namespace std;\n    using namespace cv;\n\n    std::string to_string(int val)\n    {\n        std::stringstream str;\n        str << val;\n        return str.str();\n    }\n\n    vector<string> get_all_files_names_within_folder(string folder)\n    {\n        vector<string> names;\n\n        namespace fs = boost::filesystem;\n        fs::directory_iterator end_iter;\n        if (fs::exists(folder) && fs::is_directory(folder))\n        {\n            for (fs::directory_iterator dir_iter(folder) ; dir_iter != end_iter ; ++dir_iter)\n            {\n                if (fs::is_regular_file(dir_iter->status()))\n                {\n                }\n            }\n        }\n\n        return names;\n    }\n\n\n    Mat RV_readCSVMat(string csvFrameName, int lin, int col)\n    {\n        Mat output(lin, col, CV_64FC1);\n        ifstream inFile(csvFrameName);\n        string line;\n        int linenum = 0;\n        while (getline(inFile, line))\n        {\n            linenum++;\n            //cout << \"\\nLine #\" << linenum << \":\" << endl;\n            istringstream linestream(line);\n            string item;\n            int itemnum = 0;\n            while (getline(linestream, item, ','))\n            {\n                itemnum++;\n                //cout << \"Item #\" << itemnum << \": \" << item << endl;\n                double temp = (double)atof(item.c_str());\n                output.at<double>(linenum - 1, itemnum - 1) = (double)temp;\n            }\n        }\n        return output;\n    }\n\n    void RV_readParamList(const string foldername, paramList *p)\n    {\n       
 cout << \"Reading params from: \" << foldername << \"...\";\n        ifstream in(foldername + \"config.txt\");\n        string dummy;\n\n        if (in.is_open()) {\n\n            in >> dummy;\n            in >> p->d;\n\n            in >> dummy;\n            in >> p->patchSize;\n\n            in >> dummy;\n            in >> p->stride;\n\n            in >> dummy;\n            in >> p->featSize;\n\n            in >> dummy;\n            in >> p->noPatches;\n\n            in >> dummy;\n            in >> p->noChannels;\n\n            in >> dummy;\n            in >> p->noTrees;\n\n        }\n        else {\n            cerr << \"File not found \" << foldername + \"config.txt\" << endl;\n            exit(-1);\n        }\n        in.close();\n\n        Mat dummyMat(p->d, 16, CV_64FC1);\n        dummyMat.setTo(0);\n        for (int i = 0; i < p->noPatches; i++)\n        {\n            p->patchFeats.push_back(dummyMat);\n        }\n\n        Mat paramsPatchCoords = RV_readCSVMat(foldername + \"paramsPatchCoords.csv\", p->noPatches, 2);\n        p->patchCoords = paramsPatchCoords;\n\n        for (int i = 0; i < p->noPatches; i++)\n        {\n            // rectangle coordinates are imported from Matlab as they are, which means they obey Matlab indexing (starting from 1)\n            // this will be taken care of explicitly at feature computation stage ...\n            Mat patchFeatsAux = RV_readCSVMat(foldername + \"patchFeats/patchFeats_\" + to_string(i + 1) + \".csv\", p->d, 16);\n            Mat patchFeatsAuxInt(patchFeatsAux.rows, patchFeatsAux.cols, CV_8UC1);\n            patchFeatsAux.convertTo(patchFeatsAuxInt, CV_8UC1);\n            p->patchFeats[i] = patchFeatsAuxInt;\n        }\n\n        cout << \"done!\" << endl;\n        cout << endl << \"------------------------------------\" << endl;\n        cout << \"Parameter List:\" << endl << endl;\n        cout << \"d:             \" << p->d << endl;\n        cout << \"patchSize:     \" << p->patchSize << endl;\n        
cout << \"stride:        \" << p->stride << endl;\n        cout << \"featSize:      \" << p->featSize << endl;\n        cout << \"noPatches:     \" << p->noPatches << endl;\n        cout << \"noChannels:    \" << p->noChannels << endl;\n        cout << endl << \"------------------------------------\" << endl << endl;\n    }\n\n    vector<vector<randomTree>> RV_readAllForests(string forestDir, paramList params)\n    {\n        vector<vector<randomTree>> allForests(params.noPatches, vector<randomTree>(params.noTrees));\n        cout << \"Reading forests from: \" << forestDir << \"...\";\n        for (int i = 0; i < params.noPatches; i++)\n        {\n            for (int j = 0; j < params.noTrees; j++)\n            {\n                Mat dummySize = RV_readCSVMat(forestDir + \"patch_\" + to_string(i + 1) + \"_tree_\" + to_string(j + 1) + \"_size.csv\", 1, 1);\n                Mat dummyMat = RV_readCSVMat(forestDir + \"patch_\" + to_string(i + 1) + \"_tree_\" + to_string(j + 1) + \".csv\", 4, (int)dummySize.at<double>(0, 0));\n                allForests[i][j].cutVar.reserve(dummyMat.cols);\n                allForests[i][j].cutValue.reserve(dummyMat.cols);\n                allForests[i][j].rightChild.reserve(dummyMat.cols);\n                allForests[i][j].leafVal.reserve(dummyMat.cols);\n                dummyMat.row(0).copyTo(allForests[i][j].cutVar);\n                dummyMat.row(1).copyTo(allForests[i][j].cutValue);\n                dummyMat.row(2).copyTo(allForests[i][j].rightChild);\n                dummyMat.row(3).copyTo(allForests[i][j].leafVal);\n                //cout << \"patch: \" << i << \" tree: \" << j << endl;\n            }\n        }\n        cout << \"done!\" << endl;\n        return allForests;\n    }\n\n    vector<double> RV_computeHist(vector<double> vec, int noBins)\n    {\n        // supposedly vec has exactly noBins unique values\n        const double eps = 1.0 / vec.size();\n        vector<double> out(noBins);\n        out.assign(noBins, 0);\n  
      for (int i = 0; i < (int)vec.size(); i++)\n        {\n            out[(int)vec[i] - 1] += eps;\n        }\n        return out;\n    }\n\n    vector<double> RV_testForest(Mat localData, vector<randomTree> &localForest, paramList params, int numberOfClasses)\n    {\n        vector<double> votes(params.noTrees);\n        votes.assign(params.noTrees, 0);\n        for (int k = 0; k < params.noTrees; k++)\n        {\n            randomTree tree = localForest[k];\n            int currNode = 0;\n            while (tree.cutVar[currNode] > 0)\n            {\n                if (localData.at<double>(tree.cutVar[currNode] - 1) > tree.cutValue[currNode])\n                {\n                    currNode = tree.rightChild[currNode];\n                }\n                else\n                {\n                    currNode = tree.rightChild[currNode] - 1;\n                }\n            }\n            votes[k] = tree.leafVal[currNode];\n        }\n        vector<double> probs = RV_computeHist(votes, numberOfClasses);\n        return probs;\n    }\n\n\n    Rect RV_getBoundingBox(const Mat &im3D, paramList p) {\n\n        Rect bbox;\n        int min_x = im3D.cols;\n        int min_y = im3D.rows;\n        int max_x = 0;\n        int max_y = 0;\n        int p_width = p.patchSize;\n        int p_height = p.patchSize;\n\n        for (int y = 0; y < im3D.rows; y++)\n        {\n            const double *Mi = im3D.ptr<double>(y);\n            for (int x = 0; x < im3D.cols; x++) {\n\n                if (Mi[x] > 0) {\n\n                    min_x = MIN(min_x, x);\n                    min_y = MIN(min_y, y);\n                    max_x = MAX(max_x, x);\n                    max_y = MAX(max_y, y);\n                }\n\n            }\n        }\n\n        int new_w = max_x - min_x;\n        int new_h = max_y - min_y;\n\n        bbox.x = MIN(im3D.cols - 1, MAX(0 , min_x));\n        bbox.y = MIN(im3D.rows - 1, MAX(0 , min_y));\n\n        //int new_w = max_x - min_x + p_width;\n        //int 
new_h = max_y - min_y + p_height;\n\n        //bbox.x = MIN( im3D.cols-1, MAX( 0 , min_x - p_width/2 ) );\n        //bbox.y = MIN( im3D.rows-1, MAX( 0 , min_y - p_height/2) );\n\n        bbox.width  = MAX(0, MIN(new_w, im3D.cols - bbox.x));\n        bbox.height = MAX(0, MIN(new_h, im3D.rows - bbox.y));\n\n        return bbox;\n    }\n\n    double RV_computeNorm(vector<double> vec1, vector<double> vec2)\n    {\n        double result = 0;\n        int dim = vec1.size();\n        vector<double> diff(dim);\n        double sumSq = 0;\n        for (int i = 0; i < dim; i++)\n        {\n            diff[i] = vec1[i] - vec2[i];\n            sumSq += pow(diff[i], 2);\n        }\n        result = sqrt(sumSq);\n        return result;\n    }\n\n    double RV_computeMean(vector<double> vec)\n    {\n        double sum = 0;\n        for (int k = 0; k < (int)vec.size(); k++)\n        {\n            sum += vec[k];\n        }\n        return sum / vec.size();\n    }\n\n    Mat meanShift(Mat dataPts, double bandWidth, int *maxClustPointer)\n    {\n        *maxClustPointer = -1;\n        int biggestClusterSize = 0;\n        int numDim = dataPts.cols;\n        int numPts = dataPts.rows;\n        int numClust = -1;\n        double bandSq = bandWidth * bandWidth;\n        std::vector<int> initPtInds(numPts);\n        for (int k = 0; k < (int)initPtInds.size(); k++)\n            initPtInds[k] = k;\n\n        double stopThresh = 1e-3 * bandWidth;\n\n        Mat clustCent(0, numDim, CV_64FC1);\n\n        vector<bool> beenVisitedFlag(numPts);\n        int numInitPts = numPts;\n        //vector<int> clusterVotes(numPts);\n\n        while (numInitPts > 0)\n        {\n            int tempInd = (int)std::ceil((numInitPts - 1 - 1e-6) * std::rand() / RAND_MAX);\n            int stInd = initPtInds[tempInd];\n\n            std::vector<double> myMean(numDim);\n            for (int i = 0; i < numDim; i++)\n            {\n                myMean[i] = (double)dataPts.at<double>(stInd, i);\n            
}\n\n            vector<int> thisClusterVotes(numPts);\n\n            while (1)\n            {\n                vector<double> sqDistToAll(numPts);\n                for (int i = 0; i < numPts; i++)\n                {\n                    double result = 0;\n                    for (int j = 0; j < numDim; j++)\n                    {\n                        double diff = (double)dataPts.at<double>(i, j) - myMean[j];\n                        result += pow(diff, 2);\n                    }\n                    sqDistToAll[i] = result;\n                }\n                vector<bool> inInds(numPts);\n                for (int i = 0; i < numPts; i++)\n                {\n                    inInds[i] = sqDistToAll[i] < bandSq;\n                    if (inInds[i] == true)\n                    {\n                        thisClusterVotes[i]++;\n                    }\n                }\n\n                int countTrues = count_if(inInds.begin(), inInds.end(), [](bool b) {\n                    return b == true;\n                });\n                vector<double> myOldMean = myMean;\n                cv::Mat tempData(countTrues, numDim, CV_64FC1);\n                int localIdx = 0;\n                for (int i = 0; i < (int)inInds.size(); i++)\n                {\n                    if (inInds[i] == true)\n                    {\n                        for (int j = 0; j < numDim; j++)\n                        {\n                            tempData.at<double>(localIdx, j) = (double)dataPts.at<double>(i, j);\n                        }\n                        localIdx++;\n                        beenVisitedFlag[i] = true;\n                    }\n                }\n                Mat myMeanMat;\n                reduce(tempData, myMeanMat, 0, CV_REDUCE_AVG);\n                myMeanMat.copyTo(myMean);\n\n                // compute the 2-norm of the difference-between-means vector\n                double meanNorm = RV_computeNorm(myOldMean, myMean);\n\n                if (meanNorm < 
stopThresh)\n                {\n                    int mergeWith = -1;\n                    for (int cn = 0; cn <= numClust; cn++)\n                    {\n                        vector<double> localClustCent(numDim);\n                        clustCent.row(cn).copyTo(localClustCent);\n                        double distToOther = RV_computeNorm(myMean, localClustCent);\n\n                        if (distToOther < bandWidth / 2)\n                        {\n                            mergeWith = cn;\n                            break;\n                        }\n                    }\n\n                    if (mergeWith >= 0)\n                    {\n                        Mat localMat = 0.5 * (myMeanMat + clustCent.row(mergeWith));\n                        localMat.copyTo(clustCent.row(mergeWith));\n                    }\n                    else\n                    {\n                        numClust++;\n                        clustCent.push_back(myMeanMat);\n                        if (tempData.rows > biggestClusterSize)\n                        {\n                            biggestClusterSize = tempData.rows;\n                            *maxClustPointer = numClust; // index of the biggest cluster found so far\n                        }\n                        //myMeanMat.copyTo(clustCent.row(numClust));\n                    }\n                    break;\n                }\n            }\n\n            initPtInds.clear();\n            for (int i = 0; i < (int)beenVisitedFlag.size(); i++)\n            {\n                if (beenVisitedFlag[i] == false)\n                {\n                    initPtInds.push_back(i);\n                }\n            }\n            numInitPts = (int)initPtInds.size();\n        }\n        return clustCent;\n    }\n\n    Mat RV_preprocessDepthFrame(Mat frame)\n    {\n        Mat output;\n        double minVal = 100;\n        double maxVal = -100;\n        Point minLoc, maxLoc;\n\n        for (int i = 0; i < frame.rows; i++)\n        {\n            for (int j = 0; j < 
frame.cols; j++)\n            {\n                if ((frame.at<double>(i, j) < minVal) && (frame.at<double>(i, j) != -100))\n                {\n                    minVal = frame.at<double>(i, j);\n                    minLoc.x = j; // Point::x is the column index, Point::y the row index\n                    minLoc.y = i;\n                }\n\n                if (frame.at<double>(i, j) > maxVal)\n                {\n                    maxVal = frame.at<double>(i, j);\n                    maxLoc.x = j;\n                    maxLoc.y = i;\n                }\n            }\n        }\n\n        for (int i = 0; i < frame.rows; i++)\n        {\n            for (int j = 0; j < frame.cols; j++)\n            {\n                if (frame.at<double>(i, j) != -100)\n                {\n                    frame.at<double>(i, j) = 255 - (frame.at<double>(i, j) - minVal) / (maxVal - minVal) * 254;\n                }\n                else\n                {\n                    frame.at<double>(i, j) = 0;\n                }\n            }\n        }\n\n        transpose(frame, output);\n        return output;\n    }\n\n    void RV_featExtractionSingleFrame(Mat frame, paramList params, vector<double> *bgPerc, vector<Mat> &featData)\n    {\n        featData.clear();\n        featData.resize((unsigned int)params.noPatches);\n        for (int k = 0; k < params.noPatches; k++)\n        {\n            Mat dummyMat(1, (int)(params.d * params.noChannels), CV_64FC1);\n            dummyMat.setTo(0);\n            featData[k] = dummyMat;\n        }\n\n        int rows = frame.rows;\n        int cols = frame.cols;\n\n        vector<Mat> channels = RV_channelsExtractionSingleFrame(frame, params);\n        vector<Mat> channelsInt(params.noChannels);\n        vector<Mat> maskInt(params.noChannels);\n\n        for (int i = 0; i < params.noChannels; i++)\n        {\n            Mat localChannel = channels[i];\n            Mat localChannelInt(rows + 1, cols + 1, CV_64FC1);\n            localChannelInt.setTo(0);\n            integral(localChannel, 
localChannelInt);\n\n            Mat localMask(rows, cols, CV_32FC1);\n            localMask.setTo(0);\n            localChannel.convertTo(localChannel, CV_32FC1); // cv::threshold does not accept CV_64FC1 input,\n            // so convert the channel to CV_32FC1 before thresholding\n            threshold(localChannel, localMask, 0.001, 1, THRESH_BINARY);\n\n            Mat localMaskInt(rows + 1, cols + 1, CV_64FC1);\n            localMaskInt.setTo(0);\n            integral(localMask, localMaskInt);\n            channelsInt[i] = localChannelInt;\n            maskInt[i] = localMaskInt;\n        }\n\n        for (int k = 0; k < params.patchCoords.rows; k++)\n        {\n            Mat frameAux(frame, Rect((int)params.patchCoords.at<double>(k, 1),\n                                     (int)params.patchCoords.at<double>(k, 0),\n                                     (int)params.patchSize,\n                                     (int)params.patchSize));\n            double countBG = 0;\n            for (int i = 1; i < frameAux.rows; i++) // the first row and column are not covered by the rectangular features,\n                // so the index deliberately starts from 1\n            {\n                for (int j = 1; j < frameAux.cols; j++) // same here\n                {\n                    if (frameAux.at<double>(i, j) == 0)\n                    {\n                        countBG++;\n                    }\n                }\n            }\n            (*bgPerc)[k] = countBG / (params.patchSize - 1) / (params.patchSize - 1);\n\n            vector<Mat> localImageInt(params.noChannels);\n            vector<Mat> localMaskInt(params.noChannels);\n            for (int i = 0; i < params.noChannels; i++)\n            {\n                Mat dummyMat1(channelsInt[i], Rect((int)params.patchCoords.at<double>(k, 1) + 1,\n                                                   (int)params.patchCoords.at<double>(k, 0) + 1,\n                                                   (int)params.patchSize,\n    
                                               (int)params.patchSize));\n                localImageInt[i] = dummyMat1;\n                Mat dummyMat2(maskInt[i], Rect((int)params.patchCoords.at<double>(k, 1) + 1,\n                                               (int)params.patchCoords.at<double>(k, 0) + 1,\n                                               (int)params.patchSize,\n                                               (int)params.patchSize));\n                localMaskInt[i] = dummyMat2;\n            }\n\n            Mat feats = RV_computeDiffFeats(localImageInt, localMaskInt, params.patchFeats[k]);\n            featData[k] = feats;\n        }\n        // return featData;\n    }\n\n    Mat RV_computeDiffFeats(vector<Mat> imageInt, vector<Mat> maskInt, Mat localFeat)\n    {\n        Mat feats(1, localFeat.rows * imageInt.size(), CV_64FC1);\n        feats.setTo(0);\n\n        //int numRectangles = params.patchFeats[0].cols / 4;\n        for (int i = 0; i < (int)imageInt.size(); i++)\n        {\n            Mat localImg = imageInt[i];\n            Mat localMask = maskInt[i];\n            for (int k = 0; k < localFeat.rows; k++)\n            {\n                //vector<int> v(localFeat.cols);\n                //localFeat.row(k).copyTo(v);\n\n                // keep in mind that the coordinates of the rectangles are obeying Matlab indexing, therefore adaptation is performed explicitly here\n\n                double a1 = localImg.at<double>(localFeat.at<uchar>(k, 3) - 1, localFeat.at<uchar>(k, 2) - 1) -\n                            localImg.at<double>(localFeat.at<uchar>(k, 3) - 1, localFeat.at<uchar>(k, 0) - 2) -\n                            localImg.at<double>(localFeat.at<uchar>(k, 1) - 2, localFeat.at<uchar>(k, 2) - 1) +\n                            localImg.at<double>(localFeat.at<uchar>(k, 1) - 2, localFeat.at<uchar>(k, 0) - 2);\n                double a2 = localImg.at<double>(localFeat.at<uchar>(k, 7) - 1, localFeat.at<uchar>(k, 6) - 1) -\n                     
       localImg.at<double>(localFeat.at<uchar>(k, 7) - 1, localFeat.at<uchar>(k, 4) - 2) -\n                            localImg.at<double>(localFeat.at<uchar>(k, 5) - 2, localFeat.at<uchar>(k, 6) - 1) +\n                            localImg.at<double>(localFeat.at<uchar>(k, 5) - 2, localFeat.at<uchar>(k, 4) - 2);\n                /*double a3 = localImg.at<double>(localFeat.at<uchar>(k, 11) - 1, localFeat.at<uchar>(k, 10) - 1) -\n                    localImg.at<double>(localFeat.at<uchar>(k, 11) - 1, localFeat.at<uchar>(k, 8) - 2) -\n                    localImg.at<double>(localFeat.at<uchar>(k, 9) - 2, localFeat.at<uchar>(k, 10) - 1) +\n                    localImg.at<double>(localFeat.at<uchar>(k, 9) - 2, localFeat.at<uchar>(k, 8) - 2);\n                double a4 = localImg.at<double>(localFeat.at<uchar>(k, 15) - 1, localFeat.at<uchar>(k, 14) - 1) -\n                    localImg.at<double>(localFeat.at<uchar>(k, 15) - 1, localFeat.at<uchar>(k, 12) - 2) -\n                    localImg.at<double>(localFeat.at<uchar>(k, 13) - 2, localFeat.at<uchar>(k, 14) - 1) +\n                    localImg.at<double>(localFeat.at<uchar>(k, 13) - 2, localFeat.at<uchar>(k, 12) - 2);*/\n\n                double z1 = localMask.at<double>(localFeat.at<uchar>(k, 3) - 1, localFeat.at<uchar>(k, 2) - 1) -\n                            localMask.at<double>(localFeat.at<uchar>(k, 3) - 1, localFeat.at<uchar>(k, 0) - 2) -\n                            localMask.at<double>(localFeat.at<uchar>(k, 1) - 2, localFeat.at<uchar>(k, 2) - 1) +\n                            localMask.at<double>(localFeat.at<uchar>(k, 1) - 2, localFeat.at<uchar>(k, 0) - 2);\n                double z2 = localMask.at<double>(localFeat.at<uchar>(k, 7) - 1, localFeat.at<uchar>(k, 6) - 1) -\n                            localMask.at<double>(localFeat.at<uchar>(k, 7) - 1, localFeat.at<uchar>(k, 4) - 2) -\n                            localMask.at<double>(localFeat.at<uchar>(k, 5) - 2, localFeat.at<uchar>(k, 6) - 1) +\n              
              localMask.at<double>(localFeat.at<uchar>(k, 5) - 2, localFeat.at<uchar>(k, 4) - 2);\n                /*double z3 = localMask.at<double>(localFeat.at<uchar>(k, 11) - 1, localFeat.at<uchar>(k, 10) - 1) -\n                    localMask.at<double>(localFeat.at<uchar>(k, 11) - 1, localFeat.at<uchar>(k, 8) - 2) -\n                    localMask.at<double>(localFeat.at<uchar>(k, 9) - 2, localFeat.at<uchar>(k, 10) - 1) +\n                    localMask.at<double>(localFeat.at<uchar>(k, 9) - 2, localFeat.at<uchar>(k, 8) - 2);\n                double z4 = localMask.at<double>(localFeat.at<uchar>(k, 15) - 1, localFeat.at<uchar>(k, 14) - 1) -\n                    localMask.at<double>(localFeat.at<uchar>(k, 15) - 1, localFeat.at<uchar>(k, 12) - 2) -\n                    localMask.at<double>(localFeat.at<uchar>(k, 13) - 2, localFeat.at<uchar>(k, 14) - 1) +\n                    localMask.at<double>(localFeat.at<uchar>(k, 13) - 2, localFeat.at<uchar>(k, 12) - 2);*/\n\n                double mz1 = (z1 > 1) ? z1 : 1;\n                double mz2 = (z2 > 1) ? z2 : 1;\n                /*double mz3 = (z3 > 1) ? z3 : 1;\n                double mz4 = (z4 > 1) ? 
z4 : 1;*/\n\n                //double featVal = a1 / mz1 + a3 / mz3 - a2 / mz2 - a4 / mz4;\n                double featVal = a1 / mz1 - a2 / mz2;\n                feats.at<double>(0, i * localFeat.rows + k) = featVal;\n            }\n        }\n\n        return feats;\n    }\n\n    vector<Mat> RV_channelsExtractionSingleFrame(Mat frame, paramList params)\n    {\n        vector<Mat> output(params.noChannels);\n        for (int i = 0; i < params.noChannels; i++)\n        {\n            Mat buff(frame.rows, frame.cols, CV_64FC1);\n            buff.setTo(0);\n            output[i] = buff;\n        }\n\n        output[0] = frame;\n        Mat dx(frame.rows, frame.cols, CV_64FC1);\n        Mat dy(frame.rows, frame.cols, CV_64FC1);\n\n        Sobel(frame, dx, CV_64FC1, 0, 1);\n        Sobel(frame, dy, CV_64FC1, 1, 0);\n\n        Mat magnitudeAux(frame.rows, frame.cols, CV_64FC1);\n        Mat orientation(frame.rows, frame.cols, CV_64FC1);\n\n        for (int i = 0; i < frame.rows; i++)\n        {\n            for (int j = 0; j < frame.cols; j++)\n            {\n                magnitudeAux.at<double>(i, j) = sqrt(pow(dx.at<double>(i, j), 2) + pow(dy.at<double>(i, j), 2));\n                orientation.at<double>(i, j) = atan2(dx.at<double>(i, j), dy.at<double>(i, j));\n            }\n        }\n\n        Mat magnitude(frame.rows, frame.cols, CV_64FC1);\n        log(magnitudeAux + 1.0, magnitude); // switch to log scale to attenuate differences...\n\n        output[1] = magnitude;\n\n        /*namedWindow(\"frame\", CV_WINDOW_AUTOSIZE);\n        int flag = RV_imshow(\"frame\", frame);\n        namedWindow(\"magnitude\", CV_WINDOW_AUTOSIZE);\n        flag = RV_imshow(\"magnitude\", magnitude);\n        namedWindow(\"orientation\", CV_WINDOW_AUTOSIZE);\n        flag = RV_imshow(\"orientation\", orientation);*/\n\n        //setMouseCallback(\"image2\", onMouse, 0);\n\n        // compute gradient histograms for each of the 6 discrete orientation intervals\n        // if an 
orientation falls inside a particular interval, we assign the pixel a value proportional\n        // to the gradient magnitude weighted by a peaked Gaussian whose mean is the center of the\n        // orientation interval and std = 1/(sqrt(2*pi))/5 (the 5 comes from the attempt to replicate Dollár's toolbox)\n        // of course, pdf(mu) should be 1, so we compensate for the small std by dividing the pdf by 5\n\n        // the Sobel gradient yields orientations in the range (-pi; pi), so we split this interval into 6 complementary subintervals\n\n        double sigma = 1 / sqrt(2 * M_PI) / 5;\n        for (int k = 0; k < 6; k++)\n        {\n            double mu = -5 * M_PI / 6 + k * 2 * M_PI / 6;\n            Mat localChannel(frame.rows, frame.cols, CV_64FC1);\n            for (int i = 0; i < frame.rows; i++)\n            {\n                for (int j = 0; j < frame.cols; j++)\n                {\n                    double valuePdf = RV_gaussPdf(orientation.at<double>(i, j), mu, sigma) / 5;\n                    localChannel.at<double>(i, j) = magnitude.at<double>(i, j) * valuePdf;\n                }\n            }\n            output[k + 2] = localChannel;\n        }\n\n        // compute the LBP map of the original (grayscaled) depth image\n        Mat lbpMap = RV_lbp(frame);\n        Mat lbpMap64(lbpMap.rows, lbpMap.cols, CV_64FC1);\n        lbpMap.convertTo(lbpMap64, CV_64FC1);\n\n        for (int i = 1; i < frame.rows - 1; i++)\n        {\n            for (int j = 1; j < frame.cols - 1; j++)\n            {\n                output[8].at<double>(i, j) = lbpMap64.at<double>(i - 1, j - 1);\n            }\n        }\n        return output;\n    }\n\n    double RV_gaussPdf(double x, double mu, double sigma)\n    {\n        /*\n            x = double(int(x * 10)) / 10;\n            static map<double, double> pdfMap;\n            if (pdfMap.find(x) != pdfMap.end())\n            {\n                return pdfMap[x];\n            }*/\n        double aux = -pow(x - mu, 2) / (2 * 
pow(sigma, 2));\n        double out = 1 / (sqrt(2 * M_PI) * sigma) * exp(aux);\n        /*pdfMap[x] = out;*/\n        return out;\n\n    }\n\n    int RV_imshow(string windowName, Mat img)\n    {\n        double minVal, maxVal;\n        Mat draw(img.rows, img.cols, CV_8UC1);\n        minMaxLoc(img, &minVal, &maxVal);\n        img.convertTo(draw, CV_8UC1, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));\n        imshow(windowName, draw);\n        return 1;\n    }\n\n    Mat RV_lbp(const Mat &src)\n    {\n        Mat dst;\n        dst = Mat::zeros(src.rows - 2, src.cols - 2, CV_8UC1);\n        for (int i = 1; i < src.rows - 1; i++) {\n            for (int j = 1; j < src.cols - 1; j++) {\n                uchar center = (uchar)src.at<double>(i, j);\n                unsigned char code = 0;\n                code |= ((uchar)src.at<double>(i - 1, j - 1) > center) << 7;\n                code |= ((uchar)src.at<double>(i - 1, j) > center) << 6;\n                code |= ((uchar)src.at<double>(i - 1, j + 1) > center) << 5;\n                code |= ((uchar)src.at<double>(i, j + 1) > center) << 4;\n                code |= ((uchar)src.at<double>(i + 1, j + 1) > center) << 3;\n                code |= ((uchar)src.at<double>(i + 1, j) > center) << 2;\n                code |= ((uchar)src.at<double>(i + 1, j - 1) > center) << 1;\n                code |= ((uchar)src.at<double>(i, j - 1) > center) << 0;\n                dst.at<uchar>(i - 1, j - 1) = code;\n            }\n        }\n        return 255 - dst;\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/FacialExpressions/ferLocalFunctions.h",
    "content": "#ifndef FERLOCALFUNCTIONS_H\n#define FERLOCALFUNCTIONS_H\n\n#include <string>\n#include <algorithm>\n#include <iostream>\n#include <vector>\n#include <stdint.h>\n#include \"opencv2/core/core.hpp\"\n\nnamespace fer\n{\n\n# define M_PI           3.14159265358979323846  /* pi */\n\n    using namespace std;\n    using namespace cv;\n\n    struct paramList\n    {\n        int d;\n        int patchSize;\n        int stride;\n        int featSize;\n        int noPatches;\n        int noChannels;\n        int noTrees;\n        Mat patchCoords;\n        vector<Mat> patchFeats;\n    };\n\n    struct randomTree\n    {\n        vector<double> cutVar, cutValue, rightChild, leafVal;\n    };\n\n    struct headPoseInfo\n    {\n        double x, y, yaw, tilt;\n    };\n\n    vector<string> get_all_files_names_within_folder(string folder);\n\n    Mat RV_readCSVMat(string csvFrameName, int lin, int col);\n    void RV_readParamList(const string foldername, paramList *p);\n    vector<vector<randomTree>> RV_readAllForests(string forestDir, paramList params);\n    vector<double> RV_computeHist(vector<double> vec, int noBins);\n\n    vector<double> RV_testForest(Mat localData, vector<randomTree> &localForest, paramList params, int numberOfClasses);\n\n    Rect RV_getBoundingBox(const Mat &im3D, paramList p);\n    double RV_computeNorm(vector<double> vec1, vector<double> vec2);\n    double RV_computeMean(vector<double> vec);\n    Mat RV_preprocessDepthFrame(Mat frameIn);\n    void RV_featExtractionSingleFrame(Mat frame, paramList params, vector<double> *bgPerc, vector<Mat> &result);\n    vector<Mat> RV_channelsExtractionSingleFrame(Mat frame, paramList params);\n    Mat RV_computeDiffFeats(vector<Mat> imageInt, vector<Mat> maskInt, Mat localFeat);\n\n    double RV_gaussPdf(double x, double mu, double sigma);\n    int RV_imshow(string windowName, Mat img);\n\n    Mat RV_lbp(const Mat &src);\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/CylinderOptimizedFeatureCalculator.cpp",
    "content": "//#include \"CylinderOptimizedFeatureCalculator.h\"\n\nnamespace hpe\n{\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/CylinderOptimizedFeatureCalculator.h",
    "content": "#pragma once\n\n#ifndef CYLINDEROPTIMIZEDFEATURECALCULATOR_H\n#define CYLINDEROPTIMIZEDFEATURECALCULATOR_H\n\n#include <Features/Calculator/FeatureCalculatorBase.h>\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n#include <pcl/visualization/pcl_visualizer.h>\n\n#include <boost/math/special_functions/fpclassify.hpp>\n\n#include <UI/ShowCloud.h>\n\nnamespace hpe\n{\n    enum FeatureType { Depth, RGB };\n\n    template <class PointType, class NormalType>\n    class CylinderOptimizedFeatureCalculator : public FeatureCalculatorBase <PointType, NormalType>\n    {\n        public:\n            typedef PointType TPoint;\n            typedef NormalType TNormal;\n            typedef std::shared_ptr<IPatchSampler<PointType>> Sampler;\n            typedef pcl::PointCloud<PointType> Cloud;\n            typedef pcl::PointCloud<NormalType> NormalCloud;\n            typedef pcl::search::KdTree<PointType> Search;\n\n            CylinderOptimizedFeatureCalculator(Sampler sampler, FeatureType featureType)\n                : FeatureCalculatorBase<PointType, NormalType>(sampler), m_visualize(false), m_featureType(featureType)\n            {\n            }\n\n            ~CylinderOptimizedFeatureCalculator(void)\n            {\n            }\n\n            void SetFeatureSize(cv::Size size)\n            {\n                m_featureSize = size;\n            }\n\n            cv::Mat GetFeatures(typename Cloud::Ptr &cloud)\n            {\n                m_result = CreateResult();\n                m_featureNumber = 0;\n                this->Calculate(cloud);\n                return m_result;\n            }\n\n        protected:\n            void PreCompute() {}\n\n            cv::Mat CreateResult()\n            {\n                if (m_featureType == FeatureType::Depth)\n                {\n                    return cv::Mat::ones(1, m_featureSize.width * m_featureSize.height, CV_32FC1) * -100;\n                }\n                else if (m_featureType == 
FeatureType::RGB)\n                {\n                    return cv::Mat::zeros(1, m_featureSize.width * m_featureSize.height, CV_8UC3);\n                }\n                return cv::Mat(); // fallback for an unhandled FeatureType; avoids falling off the end of the function\n            }\n\n            void PreCompute(typename Cloud::Ptr cloud, pcl::ModelCoefficients &coefficients)\n            {\n                m_columns.clear();\n                m_rows.clear();\n\n                m_referencePoint.x = coefficients.values[0];\n                m_referencePoint.y = coefficients.values[1];\n                m_referencePoint.z = coefficients.values[2];\n\n                m_phiRange.first = coefficients.values[3];\n                m_phiRange.second = coefficients.values[4];\n                m_phiSteps = (int)coefficients.values[5];\n\n                m_yRange.first = coefficients.values[6];\n                m_yRange.second = coefficients.values[7];\n                m_ySteps = (int)coefficients.values[8];\n\n                //std::vector<std::vector<int>> indices;\n                m_columns.resize(m_phiSteps);\n                m_rows.resize(m_ySteps);\n\n                for (int i = 0; i < (int)cloud->points.size(); i += 1)\n                {\n                    PointType pt = cloud->points[i];\n                    if (boost::math::isnan(pt.x) || boost::math::isnan(pt.y) || boost::math::isnan(pt.z))\n                    {\n                        continue;\n                    }\n                    int column = GetPointColumn(pt);\n                    if (column != -1)\n                    {\n                        if (column > 0) m_columns[column - 1].push_back(i);\n                        m_columns[column].push_back(i);\n                        if (column < m_phiSteps - 1) m_columns[column + 1].push_back(i);\n                    }\n                    int row = GetPointRow(pt);\n                    if (row != -1)\n                    {\n                        if (row > 0) m_rows[row - 1].push_back(i);\n                        m_rows[row].push_back(i);\n                        if (row < 
m_ySteps - 1) m_rows[row + 1].push_back(i);\n                    }\n                }\n\n                for (int i = 0; i < m_phiSteps; i += 1)\n                {\n                    std::sort(m_columns[i].begin(), m_columns[i].end());\n                }\n                for (int i = 0; i < m_ySteps; i += 1)\n                {\n                    std::sort(m_rows[i].begin(), m_rows[i].end());\n                }\n\n                if (m_visualize)\n                {\n                    pcl::visualization::PCLVisualizer v(\"Segments\");\n                    v.addPointCloud<PointType>(cloud);\n                    v.addPointCloud<PointType>(cloud, \"pts\");\n                    v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, \"pts\");\n                    for (int i = 0; i < m_phiSteps; i += 1)\n                    {\n                        typename Cloud::Ptr subcloud(new Cloud);\n                        pcl::copyPointCloud(*cloud, m_columns[i], *subcloud);\n                        v.updatePointCloud<PointType>(subcloud, \"pts\");\n                        v.spinOnce(100);\n                    }\n                    v.removeAllPointClouds();\n                }\n\n                /*m_trees.clear();\n                m_trees.resize(m_phiSteps);\n                for (int i = 0; i < m_phiSteps; i += 1)\n                {\n                    Cloud::Ptr subcloud(new Cloud);\n                    pcl::copyPointCloud(*cloud, indices[i], *subcloud);\n                    m_trees[i].setInputCloud(subcloud);\n                }*/\n\n            }\n            void ComputeFeature(typename Cloud::Ptr cloud, PointType &point, NormalType &normal)\n            {\n                int column = GetPointColumn(point);\n                if (column == -1)\n                {\n                    throw HPEException(\"CylinderOptimizedFeatureCalculator::ComputeFeature - internal structures corrupted\");\n                }\n                int row = 
GetPointRow(point);\n                if (row == -1)\n                {\n                    throw HPEException(\"CylinderOptimizedFeatureCalculator::ComputeFeature - internal structures corrupted\");\n                }\n\n                std::vector<int> indices = GetIntersection(column, row);\n\n                if (m_visualize)\n                {\n                    /*Cloud::Ptr subcloud(new Cloud);\n                    pcl::copyPointCloud(*treeCloud, indices, *subcloud);\n                    subcloud->push_back(point);\n                    pcl::visualization::PCLVisualizer v(\"sub\");\n                    v.addPointCloud<PointType>(cloud);\n                    v.addPointCloud<PointType>(subcloud, \"1\");\n                    v.addPointCloud<PointType>(treeCloud, \"2\");\n                    v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, \"1\");\n                    v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, \"1\");\n                    v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, \"2\");\n                    v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, \"2\");\n                    v.spin();*/\n                }\n\n                if (indices.size() > 0)\n                {\n                    std::vector<float> distances;\n                    for (int i = 0; i < (int)indices.size(); i += 1)\n                    {\n                        distances.push_back((point.getVector3fMap() - cloud->points[indices[i]].getVector3fMap()).norm());\n                    }\n\n                    PointType interpolated;\n                    if (indices.size() == 1)\n                    {\n                        interpolated = cloud->points[indices[0]];\n                    }\n                    else\n                    {\n                        interpolated = Interpolate(cloud, indices, distances);\n                   
 }\n\n                    if (m_featureType == FeatureType::Depth)\n                    {\n                        float depth = (interpolated.getVector3fMap() - point.getVector3fMap()).norm();\n\n                        float pointProjectionRadius = (std::pow(m_referencePoint.x - point.x, 2) + std::pow(m_referencePoint.z - point.z, 2));\n                        float interpolatedProjectionRadius = (std::pow(m_referencePoint.x - interpolated.x, 2) + std::pow(m_referencePoint.z - interpolated.z, 2));\n\n                        if (pointProjectionRadius < interpolatedProjectionRadius)\n                        {\n                            depth = -depth;\n                        }\n\n                        m_result.at<float>(m_featureNumber) = depth;\n                    }\n                    else if (m_featureType == FeatureType::RGB)\n                    {\n                        cv::Vec3b color = cv::Vec3b(interpolated.b, interpolated.g, interpolated.r);\n                        m_result.at<cv::Vec3b>(m_featureNumber) = color;\n                    }\n                }\n\n                m_featureNumber += 1;\n            }\n        private:\n\n            int GetPointColumn(PointType pt)\n            {\n                float phiStep = (m_phiRange.second - m_phiRange.first) / m_phiSteps;\n                float angle = std::atan2(pt.z - m_referencePoint.z, pt.x - m_referencePoint.x);\n                int segment = (angle - m_phiRange.first - phiStep / 2) / phiStep;\n                if (segment > m_phiSteps - 1 || segment < 0)\n                {\n                    return -1;\n                }\n                return segment;\n            }\n\n            int GetPointRow(PointType pt)\n            {\n                float yStep = (m_yRange.second - m_yRange.first) / m_ySteps;\n                int row = (pt.y - m_yRange.first - yStep / 2) / yStep;\n                if (row > m_ySteps - 1 || row < 0)\n                {\n                    return -1;\n             
   }\n                return row;\n            }\n\n            std::vector<int> GetIntersection(int column, int row)\n            {\n                std::vector<int> result;\n\n                std::set_intersection(m_columns[column].begin(), m_columns[column].end(), m_rows[row].begin(), m_rows[row].end(), std::back_inserter(result));\n\n                return result;\n            }\n\n            PointType Interpolate(typename Cloud::Ptr &cloud, std::vector<int> &indices, std::vector<float> &distances)\n            {\n                float totalDistance = 0;\n                for (int i = 0; i < distances.size(); i += 1)\n                {\n                    totalDistance += distances[i];\n                }\n\n                PointType result;\n                int n = indices.size();\n\n                for (int i = 0; i < n; i++)\n                {\n                    PointType p = cloud->at(indices[i]);\n                    float currentDistance = distances[i];\n                    float c = (totalDistance - currentDistance) / ((n - 1) * totalDistance);\n                    result.x += c * p.x;\n                    result.y += c * p.y;\n                    result.z += c * p.z;\n                    result.r += c * (float)p.r;\n                    result.g += c * (float)p.g;\n                    result.b += c * (float)p.b;\n                }\n                return result;\n            }\n\n            std::vector<std::vector<int>> m_columns;\n            std::vector<std::vector<int>> m_rows;\n\n            std::vector<pcl::search::KdTree<PointType>> m_trees;\n\n            pcl::PointXYZ m_referencePoint;\n\n            std::pair<float, float> m_phiRange;\n            int m_phiSteps;\n\n            std::pair<float, float> m_yRange;\n            int m_ySteps;\n\n            bool m_visualize;\n            cv::Mat m_result;\n            cv::Size m_featureSize;\n            int m_featureNumber;\n\n            FeatureType m_featureType;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/FeatureCalculatorBase.h",
"content": "#pragma once\n\n#ifndef FEATURE_CALCULATOR_BASE_H\n#define FEATURE_CALCULATOR_BASE_H\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n#include <pcl/search/kdtree.h>\n#ifndef EIGEN_YES_I_KNOW_SPARSE_MODULE_IS_NOT_STABLE_YET\n#define EIGEN_YES_I_KNOW_SPARSE_MODULE_IS_NOT_STABLE_YET\n#endif\n#include <pcl/features/principal_curvatures.h>\n#include <pcl/features/normal_3d.h>\n\n#include <boost/math/special_functions/fpclassify.hpp> // boost::math::isnan\n\n#include <Features/Sampler/IPatchSampler.h>\n#include <Features/IFeature.h>\n#include <Filter/Filters.h>\n#include <Landmarks.h>\n\nnamespace hpe\n{\n    template<typename PointType, typename NormalType>\n    class FeatureCalculatorBase\n    {\n        public:\n            void SetLandmarks(const typename Common<PointType>::Landmarks &landmarks)\n            {\n                m_landmarks = landmarks;\n            }\n        protected:\n            typedef PointType TPoint;\n            typedef NormalType TNormal;\n            typedef std::shared_ptr<IPatchSampler<PointType>> Sampler;\n            typedef pcl::PointCloud<PointType> Cloud;\n            typedef pcl::PointCloud<NormalType> NormalCloud;\n            typedef pcl::search::KdTree<PointType> Search;\n\n            FeatureCalculatorBase(Sampler sampler)\n                : m_sampler(sampler)\n            {\n\n            }\n\n            void Calculate(typename Cloud::Ptr &cloud)\n            {\n                pcl::ModelCoefficients coefficients;\n                if (m_landmarks.size() != 0)\n                {\n                    m_featurePoints = m_sampler->SampleCloud(cloud, m_landmarks, coefficients);\n                    m_landmarks.clear();\n                }\n                else\n                {\n                    m_featurePoints = m_sampler->SampleCloud(cloud, coefficients);\n                }\n\n                auto normalCloud = m_featurePoints;\n\n                int numberOfSamples = m_featurePoints->size();\n                m_featureNumber = 0;\n\n                if 
(coefficients.values.size() == 0)\n                {\n                    PreCompute();\n                }\n                else\n                {\n                    PreCompute(cloud, coefficients);\n                }\n\n                for (int i = 0; i < numberOfSamples; i++)\n                {\n                    auto gridPoint = m_featurePoints->at(i);\n                    PointType point;\n                    NormalType normal;\n\n                    if ((boost::math::isnan(gridPoint.x) || boost::math::isnan(gridPoint.y) || boost::math::isnan(gridPoint.z)) == false)\n                    {\n                        // TODO make both usages:\n                        int samplePointIndex = i; //GetReferencePoint(gridPoint);\n                        point = m_featurePoints->at(samplePointIndex);\n                        normal = normalCloud->at(samplePointIndex);\n                    }\n\n                    ComputeFeature(cloud, point, normal);\n\n                    m_featureNumber++;\n                }\n\n                m_featureNumber = 0;\n            }\n\n            int GetReferencePoint(PointType &point)\n            {\n                std::vector<int> indices;\n                std::vector<float> distances;\n\n                m_search->nearestKSearch(point, 1, indices, distances);\n\n                return indices[0];\n            }\n\n            int GetFeatureNumber()\n            {\n                return m_featureNumber;\n            }\n\n            int GetFeatureCount()\n            {\n                return m_featurePoints->size();\n            }\n\n            virtual void ComputeFeature(typename Cloud::Ptr cloud, PointType &point, NormalType &normal) = 0;\n            virtual void PreCompute() = 0;\n            virtual void PreCompute(typename Cloud::Ptr cloud, pcl::ModelCoefficients &coefficients) {}\n\n            typename Search::Ptr m_search;\n            Sampler m_sampler;\n\n        private:\n            int m_featureNumber;\n            
typename Common<PointType>::Landmarks m_landmarks;\n            typename Cloud::Ptr m_featurePoints;\n    };\n\n}\n\n#endif //FEATURE_CALCULATOR_BASE_H\n\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/IFeatureCalculator.h",
    "content": "#pragma once\n\n#ifndef I_FEATURE_CALCULATOR_H\n#define I_FEATURE_CALCULATOR_H\n\n#include \"Interface/IInterface.h\"\n\nnamespace hpe\n{\n    template <typename PointType>\n    class IFeatureCalculator : public IInterface\n    {\n        public:\n            virtual cv::Mat GetFeatures(typename pcl::PointCloud<PointType>::Ptr &cloud) = 0;\n    };\n}\n\n\n#endif //I_FEATURE_CALCULATOR_H"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/IFeature.h",
    "content": "#pragma once\n\n#ifndef I_FEATURE_H\n#define I_FEATURE_H\n\n#include <pcl/point_cloud.h>\n#include <opencv2/opencv.hpp>\n\n#include \"Interface/IInterface.h\"\n\n\nnamespace hpe\n{\n    template <typename PointType, typename NormalType>\n    class IFeature : public IInterface\n    {\n        public:\n            virtual cv::Mat Compute(typename pcl::PointCloud<PointType>::Ptr &cloud,\n                                    PointType &point, NormalType &normal) = 0;\n\n            virtual cv::Mat EmptyFeature() = 0;\n    };\n}\n\n#endif //I_FEATURE_H"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/Sampler/CylinderSampler.cpp",
"content": "#define _CRT_SECURE_NO_WARNINGS\n\n#include <opencv2/opencv.hpp>\n\n#include \"CylinderSampler.h\"\n\n#include <Exception/HPEException.h>\n#include <pcl/common/common.h>\n\n#include <boost/assign/std/vector.hpp>\n\n#include <UI/ShowCloud.h>\n\n#include <pcl/search/kdtree.h>\n#include <pcl/features/normal_3d.h>\n#include <pcl/io/ply_io.h>\n#include <pcl/io/vtk_io.h>\n\nusing namespace boost::assign;\n\nnamespace hpe\n{\n    CylinderSampler::CylinderSampler(Range phiRange, Range zRange, int topRows, int bottomRows, int sampleColumns)\n        : m_phiRange(phiRange), m_zRange(zRange), m_topRows(topRows),\n          m_bottomRows(bottomRows), m_visualize(false), m_sampleColumns(sampleColumns),\n          m_supportOptimizedSampling(false)\n    {\n        //m_rightEyeIndices += 0, 4;\n        //m_leftEyeIndices += 8, 12;\n        //m_rightEyeIndices += 0;\n        //m_rightEyeIndices += 0, 1, 2, 3, 4, 5, 6, 7;\n        //m_leftEyeIndices += 1;\n        //m_leftEyeIndices += 8, 9, 10, 11, 12, 13, 14, 15;\n        m_leftEyeIndices += 0;\n        m_rightEyeIndices += 1;\n    }\n\n    CylinderSampler::~CylinderSampler(void)\n    {\n    }\n\n    pcl::PointCloud<CylinderSampler::PointType>::Ptr CylinderSampler::SampleCloud(pcl::PointCloud<PointType>::Ptr &cloud)\n    {\n        CloudType::Ptr res(new CloudType);\n        return res;\n    }\n\n    pcl::PointCloud<CylinderSampler::PointType>::Ptr CylinderSampler::SampleCloud(pcl::PointCloud<PointType>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks, pcl::ModelCoefficients &coefficients)\n    {\n        CloudType::Ptr res(new CloudType);\n\n        PointType leftEye = MeanPoint(landmarks, m_leftEyeIndices);\n        PointType rightEye = MeanPoint(landmarks, m_rightEyeIndices);\n\n        Eigen::Vector3f betweenEyeVector = leftEye.getArray3fMap() - rightEye.getArray3fMap();\n        float eyeDistance = betweenEyeVector.norm();\n\n        pcl::ModelCoefficients cylinder = ComputeCylinder(rightEye, 
leftEye);\n\n        Eigen::Vector3f direction;\n        direction << 0, 0, 1; // Works only when the cloud is oriented along OZ\n        Range phiRange(CV_PI * m_phiRange.first, CV_PI * m_phiRange.second);\n\n        float eyeY = cylinder.values[1];\n\n        float maxY = eyeY + m_zRange.second * eyeDistance;\n        float minY = eyeY - m_zRange.first * eyeDistance;\n\n        int totalRows = m_bottomRows + m_topRows;\n        float stepSize = (maxY - minY) / totalRows;\n\n        std::vector<float> samplingYs;\n        float samplingYStart = eyeY + (m_topRows - 1) * stepSize;\n        for (int i = 0; i < totalRows; i++)\n        {\n            samplingYs.push_back(samplingYStart);\n            samplingYStart -= stepSize;\n        }\n\n        CloudType::Ptr samplingCylinderTop = GenerateSamplingCloud(cylinder, direction, phiRange, m_sampleColumns, samplingYs);\n\n        coefficients.values.clear();\n\n        if (m_supportOptimizedSampling)\n        {\n            coefficients.values.push_back(cylinder.values[0]);\n            coefficients.values.push_back(cylinder.values[1]);\n            coefficients.values.push_back(cylinder.values[2]);\n\n            coefficients.values.push_back(phiRange.first);\n            coefficients.values.push_back(phiRange.second);\n            coefficients.values.push_back(m_sampleColumns);\n\n            coefficients.values.push_back(eyeY + (m_topRows - 1) * stepSize);\n            coefficients.values.push_back(eyeY - m_bottomRows * stepSize);\n            coefficients.values.push_back(totalRows);\n        }\n\n        if (m_visualize)\n        {\n            std::cout << samplingCylinderTop->size() << std::endl;\n\n            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr rgbCloud(new pcl::PointCloud<pcl::PointXYZRGBA>);\n            pcl::copyPointCloud(*cloud, *rgbCloud);\n\n            CloudType::Ptr eyes(new CloudType);\n            eyes->push_back(leftEye);\n            eyes->push_back(rightEye);\n\n            std::vector<float> 
eyeYs;\n            eyeYs.push_back(eyeY);\n            auto eyeLine = GenerateSamplingCloud(cylinder, direction, phiRange, m_sampleColumns, eyeYs);\n\n            std::vector<pcl::visualization::Camera> cameras;\n\n            pcl::visualization::PCLVisualizer vEyes(\"Eyes\");\n            vEyes.addPointCloud(rgbCloud, \"c\");\n            vEyes.addPointCloud<PointType>(eyes, \"e\");\n            vEyes.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, \"e\");\n            vEyes.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, \"e\");\n            vEyes.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, \"c\");\n            vEyes.setBackgroundColor(1, 1, 1);\n\n            vEyes.spin();\n\n            vEyes.getCameras(cameras);\n\n            pcl::visualization::PCLVisualizer v(\"\");\n            v.addPointCloud(rgbCloud, \"c\");\n            v.addPointCloud<PointType>(samplingCylinderTop, \"cyl\");\n            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, \"cyl\");\n            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 1, 1, \"cyl2\");\n\n            v.addCoordinateSystem();\n            v.addPointCloud<PointType>(eyeLine, \"EyeLine\");\n            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, \"EyeLine\");\n            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, \"EyeLine\");\n            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 4, \"c\");\n            v.setBackgroundColor(1, 1, 1);\n\n            v.setCameraParameters(cameras[0]);\n            v.spin();\n\n        }\n\n        return samplingCylinderTop;\n    }\n\n    pcl::ModelCoefficients CylinderSampler::ComputeCylinder(PointType p1, PointType p2)\n    {\n        pcl::ModelCoefficients result;\n\n        
Eigen::Vector3f distanceVector = p1.getArray3fMap() - p2.getArray3fMap();\n        float distance = distanceVector.norm();\n\n        PointType middlePoint;\n        middlePoint.getVector3fMap() = p1.getVector3fMap() - distanceVector / 2;\n\n        float r = distance * 1.2f;\n        float offset = std::sqrt(distance * distance - (distance / 2) * (distance / 2));\n\n        middlePoint.z -= offset;\n\n        result.values.resize(7);\n        result.values[0] = middlePoint.x;\n        result.values[1] = middlePoint.y;\n        result.values[2] = middlePoint.z;\n\n        result.values[3] = 0;\n        result.values[4] = 1;\n        result.values[5] = 0;\n\n        result.values[6] = r;\n\n        return result;\n    }\n\n    CylinderSampler::PointType CylinderSampler::MeanPoint(const Common<PointType>::Landmarks &landmarks, const std::vector<int> &indices)\n    {\n        PointType res;\n        Eigen::Vector3f result;\n        result << 0, 0, 0;\n        for (int i = 0; i < indices.size(); i++)\n        {\n            Eigen::Vector3f p = landmarks[indices[i]].point.getArray3fMap();\n            result += p;\n        }\n        result /= indices.size();\n        res.getArray3fMap() = result;\n        return res;\n    }\n\n    CylinderSampler::CloudType::Ptr CylinderSampler::GenerateSamplingCloud(pcl::ModelCoefficients cylinder, Eigen::Vector3f direction, Range phiRange, int phiSteps, std::vector<float> samplingYs)\n    {\n        float radius = cylinder.values[6];\n        float phi = phiRange.first;\n        float phiStep = (phiRange.second - phiRange.first) / phiSteps;\n        CloudType::Ptr result(new CloudType);\n        for (int currentPhi = 0; currentPhi < 
phiSteps; ++currentPhi)\n        {\n            PointType p;\n            p.x = radius * std::cos(phi);\n            p.z = radius * std::sin(phi);\n\n            Eigen::Vector3f normalVector;\n            normalVector << p.x, p.y, p.z;\n            normalVector /= normalVector.norm();\n\n            p.normal_x = -normalVector[0];\n            p.normal_y = -normalVector[1];\n            p.normal_z = -normalVector[2];\n\n            for (int currentY = 0; currentY < samplingYs.size(); ++currentY)\n            {\n                PointType p2 = p;\n                p2.y = samplingYs[currentY];\n                result->push_back(p2);\n            }\n\n            phi += phiStep;\n        }\n\n        Eigen::Quaternionf rotation;\n        Eigen::Vector3f sourceVector;\n        sourceVector << 0, 0, 1;\n        rotation.setFromTwoVectors(sourceVector, direction);\n\n        Eigen::Matrix4f transform = Eigen::Matrix4f::Identity();\n        Eigen::Vector3f shift;\n        shift << cylinder.values[0], 0, cylinder.values[2];\n        transform.block<3, 1>(0, 3) = shift;\n        transform.block<3, 3>(0, 0) = rotation.toRotationMatrix();\n\n        pcl::transformPointCloud(*result, *result, transform);\n\n        return result;\n    }\n\n    Eigen::Vector3f CylinderSampler::GetDirection(pcl::PointCloud<PointType>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks)\n    {\n        pcl::NormalEstimation<PointType, pcl::Normal> ne;\n        ne.setInputCloud(cloud);\n\n        pcl::search::KdTree<PointType>::Ptr tree(new pcl::search::KdTree<PointType> ());\n        ne.setSearchMethod(tree);\n\n        pcl::PointCloud<pcl::Normal>::Ptr cloud_normals(new pcl::PointCloud<pcl::Normal>);\n\n        ne.setRadiusSearch(0.3);\n\n        ne.compute(*cloud_normals);\n\n        pcl::Normal normal = cloud_normals->points[landmarks[m_noseTipIndex].index];\n\n        Eigen::Vector3f 
direction;\n        direction << normal.normal_x, normal.normal_y, normal.normal_z;\n        return direction;\n    }\n\n    void CylinderSampler::SetVisualize(bool visualizeSampler)\n    {\n        m_visualize = visualizeSampler;\n    }\n\n    void CylinderSampler::SetEyeIndices(std::vector<int> &leftEyeIndices, std::vector<int> &rightEyeIndices)\n    {\n        m_leftEyeIndices = leftEyeIndices;\n        m_rightEyeIndices = rightEyeIndices;\n    }\n\n    void CylinderSampler::SetSupportOptimizedSampling(bool support)\n    {\n        m_supportOptimizedSampling = support;\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/Sampler/CylinderSampler.h",
    "content": "#pragma once\n\n#include <Features/Sampler/IPatchSampler.h>\n\n#include <pcl/ModelCoefficients.h>\n\n#ifndef CYLINDERSAMPLER_H\n#define CYLINDERSAMPLER_H\n\nnamespace hpe\n{\n    class CylinderSampler : public IPatchSampler<pcl::PointXYZRGBNormal>\n    {\n        public:\n            typedef std::pair<float, float> Range;\n            typedef pcl::PointXYZRGBNormal PointType;\n            typedef pcl::PointCloud<PointType> CloudType;\n\n            CylinderSampler(Range phiRange, Range zRange, int topRows, int bottomRows, int sampleColumns);\n            ~CylinderSampler(void);\n\n            pcl::PointCloud<PointType>::Ptr SampleCloud(pcl::PointCloud<PointType>::Ptr &cloud);\n            pcl::PointCloud<PointType>::Ptr SampleCloud(pcl::PointCloud<PointType>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks, pcl::ModelCoefficients &coefficients);\n\n            void SetVisualize(bool visualizeSampler);\n            void SetEyeIndices(std::vector<int> &leftEyeIndices, std::vector<int> &rightEyeIndices);\n            void SetSupportOptimizedSampling(bool support);\n\n\n        private:\n            pcl::ModelCoefficients ComputeCylinder(PointType p1, PointType p2);\n            PointType MeanPoint(const Common<PointType>::Landmarks &landmarks, const std::vector<int> &indices);\n            CloudType::Ptr GenerateSamplingCloud(pcl::ModelCoefficients cylinder, Eigen::Vector3f direction, Range phiRange, int phiSteps, std::vector<float> samplingYs);\n            Eigen::Vector3f GetDirection(pcl::PointCloud<PointType>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks);\n            std::vector<int> m_leftEyeIndices;\n            std::vector<int> m_rightEyeIndices;\n            int m_noseTipIndex;\n\n            bool m_visualize;\n            bool m_supportOptimizedSampling;\n\n            Range m_phiRange;\n            Range m_zRange;\n            int m_topRows;\n            int m_bottomRows;\n            int m_sampleColumns;\n    
};\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Features/Sampler/IPatchSampler.h",
"content": "#pragma once\n\n#ifndef I_PATCH_SAMPLER\n#define I_PATCH_SAMPLER\n\n#include <pcl/point_cloud.h>\n#include <pcl/ModelCoefficients.h>\n#include <Interface/IInterface.h>\n#include <Landmarks.h>\n\nnamespace hpe\n{\n    template <typename PointType>\n    class IPatchSampler : public IInterface\n    {\n        public:\n            virtual typename pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud) = 0;\n            virtual typename pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud, const typename Common<PointType>::Landmarks &landmarks)\n            {\n                return SampleCloud(cloud);\n            }\n            virtual typename pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud, pcl::ModelCoefficients &coefficients)\n            {\n                coefficients.values.clear();\n                return SampleCloud(cloud);\n            }\n            virtual typename pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud, const typename Common<PointType>::Landmarks &landmarks, pcl::ModelCoefficients &coefficients)\n            {\n                return SampleCloud(cloud, coefficients);\n            }\n    };\n}\n\n#endif //I_PATCH_SAMPLER"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/BoxFilter.h",
    "content": "#ifndef BOXFILTER_H\n#define BOXFILTER_H\n\n#include \"Filter/ICloudFilter.h\"\n#include \"Filter/Filters.h\"\n\nnamespace hpe\n{\n    /**\n     \\class\tBoxFilter\n    \n     \\brief\tA filter that cuts a box inside a point cloud.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n    \n     \\tparam\tPointType\tPoint type of the cloud.\n     */\n\n    template<typename PointType>\n    class BoxFilter : public ICloudFilter<PointType>\n    {\n        public:\n            typedef std::shared_ptr<BoxFilter> Ptr;\n\n            //TODO consider using pcl::BoxClipper3D here (maybe)\n            BoxFilter(float xFrom, float xTo, float yFrom, float yTo, float zFrom, float zTo)\n                : m_xFrom(xFrom), m_xTo(xTo), m_yFrom(yFrom), m_yTo(yTo), m_zFrom(zFrom), m_zTo(zTo)\n            {\n            }\n\n            typename pcl::PointCloud<PointType>::Ptr Filter(typename pcl::PointCloud<PointType>::Ptr &cloud)\n            {\n                auto x = std::minmax(m_xFrom, m_xTo);\n                auto c1 = PassThrough<PointType>(cloud, x.first, x.second, \"x\");\n\n                auto y = std::minmax(m_yFrom, m_yTo);\n                auto c2 = PassThrough<PointType>(c1, y.first, y.second, \"y\");\n\n                auto z = std::minmax(m_zFrom, m_zTo);\n                auto c3 = PassThrough<PointType>(c2, z.first, z.second, \"z\");\n\n                return c3;\n            }\n\n        private:\n            float m_xFrom;\n            float m_xTo;\n\n            float m_yFrom;\n            float m_yTo;\n\n            float m_zFrom;\n            float m_zTo;\n    };\n}\n\n#endif // BOXFILTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/CenteredBoxFilter.h",
    "content": "#ifndef CENTEREDBOXFILTER_H\n#define CENTEREDBOXFILTER_H\n\n#include \"Filter/ICenterCloudFilter.h\"\n#include \"Filter/Filters.h\"\n\n#include <pcl/point_cloud.h>\n\nnamespace hpe\n{\n    template <typename PointT>\n    class CenteredBoxFilter : public ICenterCloudFilter<PointT>\n    {\n        public:\n            typedef pcl::PointCloud<PointT> Cloud;\n\n            CenteredBoxFilter(float sizeX, float sizeY, float sizeZ)\n                : m_sizeX(sizeX / 2), m_sizeY(sizeY / 2), m_sizeZ(sizeZ / 2)\n            {\n\n            }\n\n            typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud, float x, float y, float z)\n            {\n                float xFrom = x - m_sizeX;\n                float xTo = x + m_sizeX;\n                auto result = PassThrough<PointT>(cloud, xFrom, xTo, \"x\", cloud->isOrganized());\n\n                float yFrom = y - m_sizeY;\n                float yTo = y + m_sizeY;\n                result = PassThrough<PointT>(result, yFrom, yTo, \"y\", cloud->isOrganized());\n\n                float zFrom = z - m_sizeZ;\n                float zTo = z + m_sizeZ;\n                result = PassThrough<PointT>(result, zFrom, zTo, \"z\", cloud->isOrganized());\n\n                return result;\n            }\n\n\n        private:\n            float m_sizeX;\n            float m_sizeY;\n            float m_sizeZ;\n    };\n}\n\n#endif // CENTEREDBOXFILTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/FilteringQueue.h",
"content": "#pragma once\n\n#ifndef FILTERINGQUEUE_H\n#define FILTERINGQUEUE_H\n\n#include <pcl/point_cloud.h>\n#include <pcl/common/common.h>\n#include <pcl/point_types.h>\n#include <pcl/io/io.h>\n\n#include \"Filter/ICloudFilter.h\"\n#include \"Filter/FunctorFilter.h\"\n#include <vector>\n\n\nnamespace hpe\n{\n    /**\n     \\class\tFilteringQueue\n    \n     \\brief\tA queue of ICloudFilter objects that itself implements the ICloudFilter interface.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n    \n     \\tparam\tPointType\tPoint type of the cloud.\n     */\n\n    template<typename PointType>\n    class FilteringQueue : public ICloudFilter<PointType>\n    {\n        public:\n            typedef std::shared_ptr<FilteringQueue> Ptr;\n            typedef std::shared_ptr<ICloudFilter<PointType>> InputFilterPtr;\n\n            FilteringQueue() {}\n\n            void AddFilter(std::shared_ptr<ICloudFilter<PointType>> filter)\n            {\n                m_filters.push_back(filter);\n            }\n\n            void AddFilterFunctor(typename FunctorFilter<PointType>::Functor functor)\n            {\n                typename FunctorFilter<PointType>::Ptr filter(new FunctorFilter<PointType>(functor));\n                m_filters.push_back(filter);\n            }\n\n            typename pcl::PointCloud<PointType>::Ptr Filter(typename pcl::PointCloud<PointType>::Ptr &cloud)\n            {\n                typename pcl::PointCloud<PointType>::Ptr result(new pcl::PointCloud<PointType>);\n                pcl::copyPointCloud(*cloud, *result);\n                for (auto it = m_filters.begin(); it != m_filters.end(); ++it)\n                {\n                    result = (*it)->Filter(result);\n                }\n\n                return result;\n            }\n\n        private:\n            std::vector<std::shared_ptr<ICloudFilter<PointType>>> m_filters;\n    };\n}\n\n#endif // FILTERINGQUEUE_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/Filters.h",
"content": "#ifndef FILTERS_H\n#define FILTERS_H\n\n#include <pcl/point_cloud.h>\n\n#include <pcl/filters/voxel_grid.h>\n#include <pcl/filters/passthrough.h>\n#include <pcl/filters/statistical_outlier_removal.h>\n#include <pcl/features/normal_3d.h>\n\n#include <pcl/surface/mls.h>\n#include <Exception/HPEException.h>\n\n#include \"UI/ShowCloud.h\"\n\nnamespace hpe\n{\n    template<typename PointT>\n    typename pcl::PointCloud<PointT>::Ptr SortPoints(typename pcl::PointCloud<PointT>::Ptr cloud/*, std::function<PointT(PointT &, PointT &)> comparer*/)\n    {\n        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);\n        pcl::copyPointCloud(*cloud, *result);\n\n        std::sort(result->begin(), result->end(), [](PointT a, PointT b) -> bool\n        {\n            float value = 100000;\n            return a.y * value + a.x > b.y * value + b.x;\n        });\n\n        return result;\n    }\n\n    template<typename PointT, typename NormalT>\n    typename pcl::PointCloud<NormalT>::Ptr ComputeNormals(typename pcl::PointCloud<PointT>::Ptr &cloud, int kSearch = 30, float radiusSearch = 0)\n    {\n        typename pcl::PointCloud<NormalT>::Ptr result(new pcl::PointCloud<NormalT>);\n\n        typedef pcl::search::KdTree<PointT> Search;\n        typename Search::Ptr search(new Search);\n\n        pcl::NormalEstimation<PointT, NormalT> estimation;\n        estimation.setSearchMethod(search);\n        if (kSearch != 0)\n        {\n            estimation.setKSearch(kSearch);\n        }\n        else if (radiusSearch != 0)\n        {\n            estimation.setRadiusSearch(radiusSearch);\n        }\n        else\n        {\n            throw HPEException(\"ComputeNormals : Specify either kSearch or radiusSearch\");\n        }\n\n        estimation.setInputCloud(cloud);\n        estimation.compute(*result);\n        return result;\n    }\n\n\n    template<typename PointT>\n    typename pcl::PointCloud<PointT>::Ptr Voxelize(typename 
pcl::PointCloud<PointT>::Ptr cloud, float leafSize)\n    {\n        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);\n\n        pcl::VoxelGrid<PointT> grid;\n        grid.setLeafSize(leafSize, leafSize, leafSize);\n        grid.setInputCloud(cloud);\n        grid.filter(*result);\n\n        return result;\n    }\n\n    template<typename PointT>\n    typename pcl::PointCloud<PointT>::Ptr RemoveOutliersByDistance(typename pcl::PointCloud<PointT>::Ptr cloud,\n            int numberOfNeighborsToAnalyze, float stdThreshold)\n    {\n        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);\n\n        pcl::StatisticalOutlierRemoval<PointT> outlierRemoval;\n        outlierRemoval.setInputCloud(cloud);\n        outlierRemoval.setMeanK(numberOfNeighborsToAnalyze);\n        outlierRemoval.setStddevMulThresh(stdThreshold);\n        outlierRemoval.filter(*result);\n\n        return result;\n    }\n\n    template<typename PointT>\n    typename pcl::PointCloud<PointT>::Ptr PassThrough(typename pcl::PointCloud<PointT>::Ptr &cloud,\n            float from, float to,\n            std::string filterName,\n            bool keepOrganized = true)\n    {\n        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);\n\n        pcl::PassThrough<PointT> filter;\n        filter.setFilterLimits(from, to);\n        if (cloud->isOrganized())\n        {\n            filter.setKeepOrganized(keepOrganized);\n        }\n        filter.setFilterFieldName(filterName);\n        filter.setInputCloud(cloud);\n        filter.filter(*result);\n\n        return result;\n    }\n\n    template<typename PointT>\n    typename pcl::PointCloud<PointT>::Ptr ResampleMLS(typename pcl::PointCloud<PointT>::Ptr &cloud, int polynomialOrder = 2, float searchRadius = 0.005f)\n    {\n        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);\n\n        typename pcl::search::KdTree<PointT>::Ptr search(new 
pcl::search::KdTree<PointT>);\n        pcl::MovingLeastSquares<PointT, PointT> mls;\n\n        mls.setInputCloud(cloud);\n        mls.setPolynomialFit(true);\n        mls.setPolynomialOrder(polynomialOrder);\n        mls.setSearchMethod(search);\n        mls.setSearchRadius(searchRadius);\n\n        mls.process(*result);\n\n        return result;\n    }\n\n}\n\n#endif // FILTERS_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/FunctorFilter.h",
"content": "#ifndef FUNCTORFILTER_H\n#define FUNCTORFILTER_H\n\n#include <functional>\n\n#include <pcl/point_cloud.h>\n#include \"Filter/ICloudFilter.h\"\n\nnamespace hpe\n{\n    /**\n     \\class\tFunctorFilter\n    \n     \\brief\tA filter that performs the operation defined by a functor. The\n\t\t\tfunctor must match the Functor signature.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n    \n     \\tparam\tPointType\tPoint type of the cloud.\n     */\n\n    template<typename PointType>\n    class FunctorFilter : public ICloudFilter<PointType>\n    {\n        public:\n            typedef std::function<typename pcl::PointCloud<PointType>::Ptr(typename pcl::PointCloud<PointType>::Ptr &)> Functor;\n            typedef std::shared_ptr<FunctorFilter> Ptr;\n\n            FunctorFilter(Functor functor)\n                : m_functor(functor)\n            {\n\n            }\n\n            typename pcl::PointCloud<PointType>::Ptr Filter(typename pcl::PointCloud<PointType>::Ptr &cloud)\n            {\n                return m_functor(cloud);\n            }\n\n        private:\n            Functor m_functor;\n\n    };\n}\n\n#endif // FUNCTORFILTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/ICenterCloudFilter.h",
    "content": "#ifndef ICENTERCLOUDFILTER_H\n#define ICENTERCLOUDFILTER_H\n\n#include <Interface/IInterface.h>\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n\nnamespace hpe\n{\n    template<typename PointType>\n    class ICenterCloudFilter : public IInterface\n    {\n        public:\n            typedef pcl::PointCloud<PointType> Cloud;\n\n            virtual typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud, float x, float y, float z) = 0;\n    };\n}\n\n#endif // ICENTERCLOUDFILTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/ICloudFilter.h",
"content": "#ifndef ICLOUDFILTER_H\n#define ICLOUDFILTER_H\n\n#include <Interface/IInterface.h>\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n\nnamespace hpe\n{\n    /**\n     \\class\tICloudFilter\n    \n     \\brief\tA generic interface that takes a cloud and produces another cloud.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n    \n     \\tparam\tPointType\tPoint type of the cloud.\n     */\n\n    template<typename PointType>\n    class ICloudFilter : public IInterface\n    {\n        public:\n            typedef pcl::PointCloud<PointType> Cloud;\n\n            virtual typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud) = 0;\n    };\n}\n\n#endif // ICLOUDFILTER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Filter/MovingLeastSquaresFilter.h",
    "content": "#pragma once\n\n#ifndef MOVINGLEASTSQUARESFILTER_H\n#define MOVINGLEASTSQUARESFILTER_H\n\n#include <Filter/ICloudFilter.h>\n\n#include <pcl/search/kdtree.h>\n//#define PCL_NO_PRECOMPILE\n#include <pcl/surface/mls.h>\n//#undef PCL_NO_PRECOMPILE\n\nnamespace hpe\n{\n    template <class PointType>\n    class MovingLeastSquaresFilter : public ICloudFilter<PointType>\n    {\n        public:\n            typedef typename ICloudFilter<PointType>::Cloud Cloud;\n\n            typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud)\n            {\n                typename Cloud::Ptr result(new Cloud);\n\n                typename pcl::search::KdTree<PointType>::Ptr search(new pcl::search::KdTree<PointType>);\n                typename pcl::MovingLeastSquares<PointType, PointType> mls;\n\n                search->setInputCloud(cloud);\n\n                mls.setInputCloud(cloud);\n                mls.setPolynomialFit(true);\n                mls.setPolynomialOrder(3);\n                mls.setSearchMethod(search);\n                mls.setSearchRadius(0.015);\n                mls.setUpsamplingMethod(mls.VOXEL_GRID_DILATION);\n                mls.setDilationVoxelSize(0.001);\n\n                mls.process(*result);\n\n                return result;\n            }\n    };\n}\n\n#endif\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/GrabberBase.cpp",
    "content": "#include \"GrabberBase.h\"\n\n#include <DataObject/RawFrames.h>\n#include <pcl/common/time.h>\n\nnamespace hpe\n{\n\n    GrabberBase::GrabberBase()\n    {\n    }\n\n    void GrabberBase::Start()\n    {\n        PreRun();\n\n        for (auto p = m_processors.begin(); p != m_processors.end(); p++)\n        {\n            IProcessor::Ptr processor = *p;\n            processor->Init();\n        }\n\n        m_run = true;\n        while (m_run)\n        {\n            //TODO make thread safe\n            for (auto p = m_processorsToAdd.begin(); p != m_processorsToAdd.end(); p++)\n            {\n                IProcessor::Ptr processor = *p;\n                processor->Init();\n                m_processors.push_back(processor);\n            }\n            m_processorsToAdd.clear();\n\n            RawFrames::Ptr rawFrames(new RawFrames);\n            bool success = GetNextFrame(rawFrames->colorFrame, rawFrames->depthFrame);\n            if (success == false)\n            {\n                break;\n            }\n\n            IDataStorage::Ptr storage = CreateDataStorage();\n            storage->Set(\"RawFrames\", rawFrames);\n\n            for (auto p = m_processors.begin(); p != m_processors.end(); p++)\n            {\n                IProcessor::Ptr processor = *p;\n                processor->Process(storage);\n            }\n        }\n\n        for (auto p = m_processors.begin(); p != m_processors.end(); p++)\n        {\n            IProcessor::Ptr processor = *p;\n            processor->Cleanup();\n        }\n    }\n\n    void GrabberBase::Stop()\n    {\n        m_run = false;\n    }\n\n    void GrabberBase::AddProcessor(IProcessor::Ptr processor)\n    {\n        m_processorsToAdd.push_back(processor);\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/GrabberBase.h",
    "content": "#pragma once\n\n#ifndef GRABBERBASE_H\n#define GRABBERBASE_H\n\n#include <opencv2/opencv.hpp>\n\n#include <DataObject/IDataObject.h>\n#include <DataObject/IDataStorage.h>\n#include <Processor/IProcessor.h>\n\n#include <vector>\n\nnamespace hpe\n{\n    /**\n     \\class\tGrabberBase\n    \n     \\brief\tA generic component that decouples technical sensor details from the \n\t\t\trecognition framework. Typically it should be subclassed for each \n\t\t\tsensor.\n\n\t\t\tA grabber takes a sequence of processors that perform the desired \n\t\t\tfunctionality. Recognition methods know nothing about \n\t\t\tprocessors and grabbers; they just do recognition. The grabber knows \n\t\t\tnothing about recognition methods, but knows how to call processors,\n\t\t\twhich in turn know nothing about the technical details of the \n\t\t\tsensor; they simply accept data from the grabber, process it,\n\t\t\tand pass it forward.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class GrabberBase\n    {\n        public:\n            GrabberBase();\n            void Start();\n            void Stop();\n            void AddProcessor(IProcessor::Ptr processor);\n\n        protected:\n            virtual bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame) = 0;\n            virtual IDataStorage::Ptr CreateDataStorage() = 0;\n            virtual void PreRun() = 0;\n\n        private:\n            bool m_run;\n            std::vector<IProcessor::Ptr> m_processors;\n            std::vector<IProcessor::Ptr> m_processorsToAdd;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/OpenNIGrabber.cpp",
    "content": "#include \"OpenNIGrabber.h\"\n\n#include <DataObject/MapDataStorage.h>\n\nnamespace hpe\n{\n\n    OpenNIGrabber::OpenNIGrabber(void)\n    {\n    }\n\n    OpenNIGrabber::~OpenNIGrabber(void)\n    {\n    }\n\n    void OpenNIGrabber::PreRun()\n    {\n        m_openniGrabber = std::shared_ptr<pcl::OpenNIGrabber>(new pcl::OpenNIGrabber());\n        m_haveFrames = false;\n\n        boost::function<void (const boost::shared_ptr<openni_wrapper::Image>&, const boost::shared_ptr<openni_wrapper::DepthImage>&, float)> f =\n            boost::bind(&OpenNIGrabber::GrabberCallback, this, _1, _2, _3);\n\n        m_openniGrabber->registerCallback(f);\n        m_openniGrabber->start();\n    }\n\n    hpe::IDataStorage::Ptr OpenNIGrabber::CreateDataStorage()\n    {\n        return IDataStorage::Ptr(new MapDataStorage);\n    }\n\n    cv::Mat OpenNIGrabber::GetColorFrame(const boost::shared_ptr<openni_wrapper::Image> &img)\n    {\n        cv::Mat frameRGB = cv::Mat(img->getHeight(), img->getWidth(), CV_8UC3);\n\n        img->fillRGB(frameRGB.cols, frameRGB.rows, frameRGB.data, frameRGB.step);\n        cv::Mat frameBGR;\n        cv::cvtColor(frameRGB, frameBGR, cv::COLOR_RGB2BGR);\n        return frameBGR;\n    }\n\n    cv::Mat OpenNIGrabber::GetDepthFrame(const boost::shared_ptr<openni_wrapper::DepthImage> &depthImg)\n    {\n        cv::Mat frameDepth = cv::Mat(depthImg->getHeight(), depthImg->getWidth(), CV_32FC1);\n        depthImg->fillDepthImage(frameDepth.cols, frameDepth.rows, (float *)frameDepth.data, frameDepth.step);\n        return frameDepth;\n    }\n\n    void OpenNIGrabber::GrabberCallback(const boost::shared_ptr<openni_wrapper::Image> &color, const boost::shared_ptr<openni_wrapper::DepthImage> &depth, float c)\n    {\n        (void)c;\n\n        cv::Mat colorMat = GetColorFrame(color);\n        cv::Mat depthMat = GetDepthFrame(depth);\n\n        colorMat.copyTo(m_colorFrame);\n        depthMat.copyTo(m_depthFrame);\n\n        m_haveFrames = true;\n    }\n\n 
   bool OpenNIGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame)\n    {\n        while (m_haveFrames == false)\n        {\n            boost::this_thread::sleep(boost::posix_time::milliseconds(10));\n        }\n        m_haveFrames = false;\n\n        m_colorFrame.copyTo(colorFrame);\n        m_depthFrame.copyTo(depthFrame);\n\n        return true;\n    }\n\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/OpenNIGrabber.h",
    "content": "#pragma once\n#ifndef OPENNIGRABBER_H\n#define OPENNIGRABBER_H\n\n#include <Grabber/GrabberBase.h>\n\n#include <pcl/io/openni_grabber.h>\n#include <DataObject/IDataStorage.h>\n#include <boost/smart_ptr/shared_ptr.hpp>\n#include <opencv2/core/core.hpp>\n\nnamespace hpe\n{\n\n    class OpenNIGrabber : public GrabberBase\n    {\n        public:\n            OpenNIGrabber(void);\n            ~OpenNIGrabber(void);\n\n        protected:\n            void PreRun();\n            hpe::IDataStorage::Ptr CreateDataStorage();\n            bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame);\n\n        private:\n            cv::Mat GetColorFrame(const boost::shared_ptr<openni_wrapper::Image> &img);\n            cv::Mat GetDepthFrame(const boost::shared_ptr<openni_wrapper::DepthImage> &depthImg);\n\n            void GrabberCallback(const boost::shared_ptr<openni_wrapper::Image> &color, const boost::shared_ptr<openni_wrapper::DepthImage> &depth, float c);\n\n            std::shared_ptr<pcl::OpenNIGrabber> m_openniGrabber;\n            cv::Mat m_colorFrame;\n            cv::Mat m_depthFrame;\n            bool m_haveFrames;\n    };\n\n}\n\n#endif\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/PCDGrabber.cpp",
    "content": "#include \"PCDGrabber.h\"\n\n#include <DataObject/MapDataStorage.h>\n#include <DataObject/CloudXYZRGBA.h>\n#include <boost/format.hpp>\n#include <pcl/io/pcd_io.h>\n\nPCDGrabber::PCDGrabber(void)\n    : m_frameNumber(0), m_endNumber(100), m_storeFolder(\"clouds\")\n{\n}\n\nPCDGrabber::PCDGrabber(int start, int end)\n    : m_frameNumber(start), m_endNumber(end), m_storeFolder(\"clouds\")\n{\n}\n\nPCDGrabber::PCDGrabber(std::string folder)\n    : m_frameNumber(0), m_endNumber(0), m_storeFolder(folder)\n{\n}\n\nPCDGrabber::PCDGrabber(int start, int end, std::string folder)\n    : m_frameNumber(start), m_endNumber(end), m_storeFolder(folder)\n{\n}\n\nPCDGrabber::~PCDGrabber(void)\n{\n}\n\nvoid PCDGrabber::PreRun()\n{\n    m_frameNumber = 1;\n    EnumerateFiles();\n}\n\nhpe::IDataStorage::Ptr PCDGrabber::CreateDataStorage()\n{\n    using namespace hpe;\n    IDataStorage::Ptr result(new MapDataStorage);\n    CloudXYZRGBA::Ptr cloudObj(new CloudXYZRGBA(true));\n    std::string path = m_pcdFiles[m_frameNumber];\n    m_frameNumber += 1;\n    pcl::io::loadPCDFile(path, *(cloudObj->cloud));\n    result->Set(\"Cloud\", cloudObj);\n\n    CloudFileInfo::Ptr fileinfo(new CloudFileInfo);\n    fileinfo->Filename = path;\n    result->Set(\"FileInfo\", fileinfo);\n\n    return result;\n}\n\nbool PCDGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame)\n{\n    return m_frameNumber < m_pcdFiles.size();\n}\n\nstruct SortNumerically\n{\n    template <typename T>\n    T st2num(const std::string &Text)\n    {\n        std::stringstream ss(Text);\n        T result;\n        return ss >> result ? 
result : 0;\n    }\n\n    bool operator()(const std::string &a, const std::string &b)\n    {\n        namespace fs = boost::filesystem;\n        fs::path p1 = fs::path(a);\n        fs::path p2 = fs::path(b);\n\n        int v1 = st2num<int>(p1.stem().string());\n        int v2 = st2num<int>(p2.stem().string());\n\n        return v1 < v2;\n    }\n};\n\nvoid PCDGrabber::EnumerateFiles()\n{\n    namespace fs = boost::filesystem;\n\n    fs::path rootFolder(m_storeFolder);\n\n    if (fs::exists(rootFolder))\n    {\n        if (fs::is_directory(rootFolder))\n        {\n            auto it = fs::directory_iterator(rootFolder);\n            for (; it != fs::directory_iterator(); it++)\n            {\n                auto fileName = *it;\n                if (fs::is_regular(fileName))\n                {\n                    auto path = fileName.path();\n                    fs::path extension = path.extension();\n                    auto filename = path.filename().string();\n                    std::string p = path.string();\n                    if (extension == \".pcd\")\n                    {\n                        m_pcdFiles.push_back(p);\n                    }\n                }\n            }\n        }\n    }\n\n    SortNumerically s;\n    std::sort(m_pcdFiles.begin(), m_pcdFiles.end(), s);\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/PCDGrabber.h",
    "content": "#pragma once\n\n#include <Grabber/GrabberBase.h>\n#include <DataObject/IDataObject.h>\n\nclass PCDGrabber : public hpe::GrabberBase\n{\n    public:\n        PCDGrabber(void);\n        PCDGrabber(int start, int end);\n        PCDGrabber(int start, int end, std::string folder);\n        PCDGrabber(std::string folder);\n        ~PCDGrabber(void);\n\n        class CloudFileInfo : public hpe::IDataObject\n        {\n            public:\n                typedef std::shared_ptr<CloudFileInfo> Ptr;\n                std::string Filename;\n        };\n\n    protected:\n        void PreRun();\n        hpe::IDataStorage::Ptr CreateDataStorage();\n        bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame);\n\n    private:\n        void EnumerateFiles();\n\n        std::vector<std::string> m_pcdFiles;\n\n        std::string m_storeFolder;\n        int m_frameNumber;\n        int m_endNumber;\n};\n\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/ProviderGrabber.cpp",
    "content": "#include \"ProviderGrabber.h\"\n\n#include <Exception/HPEException.h>\n#include <DataObject/MapDataStorage.h>\n\nnamespace hpe\n{\n\n    ProviderGrabber::ProviderGrabber(void)\n        : m_currentFrame(0)\n    {\n    }\n\n\n    ProviderGrabber::~ProviderGrabber(void)\n    {\n    }\n\n    bool ProviderGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame)\n    {\n        if (m_currentFrame >= m_provider->GetFrameCount())\n        {\n            return false;\n        }\n\n        colorFrame = m_provider->GetBgr(m_currentFrame);\n        depthFrame = m_provider->GetDepth(m_currentFrame);\n        m_currentFrame += 1;\n\n        return true;\n    }\n\n    IDataStorage::Ptr ProviderGrabber::CreateDataStorage()\n    {\n        IDataStorage::Ptr result(new MapDataStorage);\n        return result;\n    }\n\n    void ProviderGrabber::SetProvider(std::shared_ptr<ProviderType> provider)\n    {\n        m_provider = provider;\n    }\n\n    void ProviderGrabber::PreRun()\n    {\n        if (m_provider.get() == nullptr)\n        {\n            throw HPEException(\"ProviderGrabber::PreRun - m_provider == nullptr\");\n        }\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Grabber/ProviderGrabber.h",
    "content": "#pragma once\n\n#ifndef PROVIDERGRABBER_H\n#define PROVIDERGRABBER_H\n\n#include <Grabber/GrabberBase.h>\n#include <Provider/IDataProvider.h>\n#include <pcl/point_types.h>\n\nnamespace hpe\n{\n    /**\n     \\class\tProviderGrabber\n    \n     \\brief\tA grabber that imitates the usual grabber functionality\n\t\t\tbut reads frames from a Provider instead of a \n\t\t\tsensor.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class ProviderGrabber : public GrabberBase\n    {\n        public:\n            typedef IDataProvider<pcl::PointXYZRGBA> ProviderType;\n\n            ProviderGrabber(void);\n            ~ProviderGrabber(void);\n\n            void SetProvider(std::shared_ptr<ProviderType> provider);\n\n        protected:\n            bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame);\n            IDataStorage::Ptr CreateDataStorage();\n            void PreRun();\n\n        private:\n            std::shared_ptr<ProviderType> m_provider;\n            int m_currentFrame;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/HeadPoseEstimationFramework.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <ProjectGuid>{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}</ProjectGuid>\n    <Keyword>Win32Proj</Keyword>\n    <RootNamespace>HeadPoseEstimationFramework</RootNamespace>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    
<CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n    <Import Project=\"..\\Config\\CommonProperties.props\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"..\\Config\\CommonProperties.props\" />\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup 
Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <LinkIncremental>true</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <LinkIncremental>true</LinkIncremental>\n    <TargetName>$(ProjectName)_$(Configuration)</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <LinkIncremental>false</LinkIncremental>\n    <TargetName>$(ProjectName)_$(Configuration)</TargetName>\n  </PropertyGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <MultiProcessorCompilation>true</MultiProcessorCompilation>\n      <MinimalRebuild>false</MinimalRebuild>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      
<PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <MultiProcessorCompilation>true</MultiProcessorCompilation>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemGroup>\n    <ClCompile Include=\"Align\\CloudMapper.cpp\" />\n    <ClCompile Include=\"Align\\IncrementalHeadTemplateCreator.cpp\" />\n    <ClCompile Include=\"Align\\TransformationEstimationHPE.cpp\" />\n    <ClCompile Include=\"Detector\\DepthFrameDetector\\csvMat.cpp\" />\n    <ClCompile Include=\"Detector\\DepthFrameDetector\\DepthDetector.cpp\" />\n    <ClCompile Include=\"Detector\\DepthFrameDetector\\LeafNode.cpp\" />\n    <ClCompile Include=\"Detector\\DepthFrameDetector\\localFunctions.cpp\" />\n    <ClCompile Include=\"Detector\\DepthFrameDetector\\Tree.cpp\" />\n    
<ClCompile Include=\"FacialExpressions\\ferLocalFunctions.cpp\" />\n    <ClCompile Include=\"Features\\Calculator\\CylinderOptimizedFeatureCalculator.cpp\" />\n    <ClCompile Include=\"Features\\Sampler\\CylinderSampler.cpp\" />\n    <ClCompile Include=\"Grabber\\PCDGrabber.cpp\" />\n    <ClCompile Include=\"Helpers\\HpeHelpers.cpp\" />\n    <ClCompile Include=\"Processor\\DetectorProcessor.cpp\" />\n    <ClCompile Include=\"Processor\\FacialExpressionsProcessor.cpp\" />\n    <ClCompile Include=\"Processor\\FilterProcessor.cpp\" />\n    <ClCompile Include=\"Processor\\SaveCloudProcessor.cpp\" />\n    <ClCompile Include=\"Processor\\TrackingProcessor.cpp\" />\n    <ClCompile Include=\"WindowsOnly\\Processor\\KinectSDKConverterProcessor.cpp\" />\n    <ClCompile Include=\"WindowsOnly\\Grabber\\KinectSDKGrabber.cpp\" />\n    <ClCompile Include=\"Processor\\DepthPreprocessingProcessor.cpp\" />\n    <ClCompile Include=\"Processor\\HeadExtractionProcessor.cpp\" />\n    <ClCompile Include=\"Processor\\ConverterProcessor.cpp\" />\n    <ClCompile Include=\"DataObject\\MapDataStorage.cpp\" />\n    <ClCompile Include=\"Converter\\KinectDataConverter.cpp\" />\n    <ClCompile Include=\"Exception\\HPEException.cpp\" />\n    <ClCompile Include=\"Grabber\\GrabberBase.cpp\" />\n    <ClCompile Include=\"Grabber\\ProviderGrabber.cpp\" />\n    <ClCompile Include=\"Settings\\HPESettings.cpp\" />\n    <ClCompile Include=\"Processor\\TemplateCreatorProcessor.cpp\" />\n    <ClCompile Include=\"Processor\\ShowCloudProcessor.cpp\" />\n  </ItemGroup>\n  <ItemGroup>\n    <ClInclude Include=\"Align\\CloudMapper.h\" />\n    <ClInclude Include=\"Align\\IHeadTemplateCreator.h\" />\n    <ClInclude Include=\"Align\\TemplateAlignment.h\" />\n    <ClInclude Include=\"Align\\TransformationEstimationHPE.h\" />\n    <ClInclude Include=\"DataObject\\CloudXYZRGBA.h\" />\n    <ClInclude Include=\"Align\\IncrementalHeadTemplateCreator.h\" />\n    <ClInclude Include=\"DataObject\\HeadPoseInfo.h\" />\n    
<ClInclude Include=\"DataObject\\LandmarksObject.h\" />\n    <ClInclude Include=\"Detector\\DepthFrameDetector\\csvMat.h\" />\n    <ClInclude Include=\"Detector\\DepthFrameDetector\\DepthDetector.h\" />\n    <ClInclude Include=\"Detector\\DepthFrameDetector\\LeafNode.h\" />\n    <ClInclude Include=\"Detector\\DepthFrameDetector\\localFunctions.h\" />\n    <ClInclude Include=\"Detector\\DepthFrameDetector\\Tree.h\" />\n    <ClInclude Include=\"FacialExpressions\\ferLocalFunctions.h\" />\n    <ClInclude Include=\"Features\\Calculator\\CylinderOptimizedFeatureCalculator.h\" />\n    <ClInclude Include=\"Features\\Sampler\\CylinderSampler.h\" />\n    <ClInclude Include=\"Grabber\\PCDGrabber.h\" />\n    <ClInclude Include=\"Filter\\MovingLeastSquaresFilter.h\" />\n    <ClInclude Include=\"Helpers\\HpeHelpers.h\" />\n    <ClInclude Include=\"Processor\\DetectorProcessor.h\" />\n    <ClInclude Include=\"Processor\\FacialExpressionsProcessor.h\" />\n    <ClInclude Include=\"Processor\\FilterProcessor.h\" />\n    <ClInclude Include=\"Processor\\SaveCloudProcessor.h\" />\n    <ClInclude Include=\"Processor\\TrackingProcessor.h\" />\n    <ClInclude Include=\"rapidjson\\allocators.h\" />\n    <ClInclude Include=\"rapidjson\\document.h\" />\n    <ClInclude Include=\"rapidjson\\encodedstream.h\" />\n    <ClInclude Include=\"rapidjson\\encodings.h\" />\n    <ClInclude Include=\"rapidjson\\error\\en.h\" />\n    <ClInclude Include=\"rapidjson\\error\\error.h\" />\n    <ClInclude Include=\"rapidjson\\filereadstream.h\" />\n    <ClInclude Include=\"rapidjson\\filestream.h\" />\n    <ClInclude Include=\"rapidjson\\filewritestream.h\" />\n    <ClInclude Include=\"rapidjson\\internal\\dtoa.h\" />\n    <ClInclude Include=\"rapidjson\\internal\\itoa.h\" />\n    <ClInclude Include=\"rapidjson\\internal\\meta.h\" />\n    <ClInclude Include=\"rapidjson\\internal\\pow10.h\" />\n    <ClInclude Include=\"rapidjson\\internal\\stack.h\" />\n    <ClInclude Include=\"rapidjson\\internal\\strfunc.h\" 
/>\n    <ClInclude Include=\"rapidjson\\memorybuffer.h\" />\n    <ClInclude Include=\"rapidjson\\memorystream.h\" />\n    <ClInclude Include=\"rapidjson\\msinttypes\\inttypes.h\" />\n    <ClInclude Include=\"rapidjson\\msinttypes\\stdint.h\" />\n    <ClInclude Include=\"rapidjson\\prettywriter.h\" />\n    <ClInclude Include=\"rapidjson\\rapidjson.h\" />\n    <ClInclude Include=\"rapidjson\\reader.h\" />\n    <ClInclude Include=\"rapidjson\\stringbuffer.h\" />\n    <ClInclude Include=\"rapidjson\\writer.h\" />\n    <ClInclude Include=\"WindowsOnly\\Processor\\KinectSDKConverterProcessor.h\" />\n    <ClInclude Include=\"WindowsOnly\\Grabber\\KinectSDKGrabber.h\" />\n    <ClInclude Include=\"Processor\\DepthPreprocessingProcessor.h\" />\n    <ClInclude Include=\"Processor\\HeadExtractionProcessor.h\" />\n    <ClInclude Include=\"Processor\\ConverterProcessor.h\" />\n    <ClInclude Include=\"DataObject\\IDataObject.h\" />\n    <ClInclude Include=\"DataObject\\IDataStorage.h\" />\n    <ClInclude Include=\"DataObject\\MapDataStorage.h\" />\n    <ClInclude Include=\"DataObject\\RawFrames.h\" />\n    <ClInclude Include=\"Common.h\" />\n    <ClInclude Include=\"Converter\\IDataConverter.h\" />\n    <ClInclude Include=\"Converter\\KinectDataConverter.h\" />\n    <ClInclude Include=\"DataFilter\\IDataFilter.h\" />\n    <ClInclude Include=\"Descriptors2D\\IDescriptor.h\" />\n    <ClInclude Include=\"Features\\Calculator\\FeatureCalculatorBase.h\" />\n    <ClInclude Include=\"Features\\DepthProjectionFeature.h\" />\n    <ClInclude Include=\"Exception\\HPEException.h\" />\n    <ClInclude Include=\"Features\\Calculator\\FeatureCalculator.h\" />\n    <ClInclude Include=\"Features\\IFeature.h\" />\n    <ClInclude Include=\"Features\\Calculator\\IFeatureCalculator.h\" />\n    <ClInclude Include=\"Features\\Sampler\\IPatchSampler.h\" />\n    <ClInclude Include=\"Features\\ProjectionFeatureBase.h\" />\n    <ClInclude Include=\"Filter\\BoxFilter.h\" />\n    <ClInclude 
Include=\"Filter\\CenteredBoxFilter.h\" />\n    <ClInclude Include=\"Filter\\FilteringQueue.h\" />\n    <ClInclude Include=\"Filter\\Filters.h\" />\n    <ClInclude Include=\"Filter\\FunctorFilter.h\" />\n    <ClInclude Include=\"Filter\\ICenterCloudFilter.h\" />\n    <ClInclude Include=\"Filter\\ICloudFilter.h\" />\n    <ClInclude Include=\"Grabber\\GrabberBase.h\" />\n    <ClInclude Include=\"GraphicalObjects\\IGraphicalObject.h\" />\n    <ClInclude Include=\"Features\\Sampler\\GridSampler.h\" />\n    <ClInclude Include=\"Features\\RGBProjectionFeature.h\" />\n    <ClInclude Include=\"Processor\\IProcessor.h\" />\n    <ClInclude Include=\"Grabber\\ProviderGrabber.h\" />\n    <ClInclude Include=\"Settings\\HPESettings.h\" />\n    <ClInclude Include=\"Interface\\IInterface.h\" />\n    <ClInclude Include=\"Landmarks.h\" />\n    <ClInclude Include=\"Provider\\IDataProvider.h\" />\n    <ClInclude Include=\"Processor\\ShowCloudProcessor.h\" />\n    <ClInclude Include=\"SurfaceReconstruction\\ISurfaceReconstructor.h\" />\n    <ClInclude Include=\"Processor\\TemplateCreatorProcessor.h\" />\n    <ClInclude Include=\"Tracker\\ICPTemplateTracker.h\" />\n    <ClInclude Include=\"UI\\PointPicker.h\" />\n    <ClInclude Include=\"UI\\ShowCloud.h\" />\n    <ClInclude Include=\"UI\\ShowMesh.h\" />\n    <ClInclude Include=\"UI\\VisualizerCommon.h\" />\n    <ClInclude Include=\"vtkBindings\\MarchingCubes.h\" />\n  </ItemGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Helpers/HpeHelpers.cpp",
    "content": "#include \"HpeHelpers.h\"\n\n\n\nnamespace hpe\n{\n    Eigen::Vector3f VectorToEulerAngles(Eigen::Vector3f v)\n    {\n        Eigen::Vector3f normed = v / v.norm();\n        float x = normed(0);\n        float y = normed(1);\n        float z = normed(2);\n\n        Eigen::Vector3f result;\n        result(0) = std::atan2(z, x); // yaw\n        result(1) = std::atan2(y, z); // tilt\n        result(2) = std::atan(y / x); // roll\n        return result;\n    }\n\n    Eigen::Vector3f EulerAnglesToVector(double yaw, double tilt, bool inDegrees)\n    {\n        if (inDegrees)\n        {\n            yaw = yaw / 180 * M_PI;\n            tilt = tilt / 180 * M_PI;\n        }\n\n        Eigen::Vector3f newVector;\n        double yawTan = std::tan(yaw);\n        double tiltTan = std::tan(tilt);\n        if (yawTan != 0)\n        {\n            newVector << 1 / yawTan, -tiltTan, 1;\n        }\n        else\n        {\n            newVector << 1000, -tiltTan, 1;\n        }\n\n        return newVector / newVector.norm();\n    }\n\n    cv::Point2f MeanPoint(std::vector<cv::Point2f> points)\n    {\n        cv::Point2f res(0, 0);\n        for (auto pt = points.begin(); pt != points.end(); pt++)\n        {\n            res.x += pt->x;\n            res.y += pt->y;\n        }\n\n        if (points.size() != 0)\n        {\n            res.x /= points.size();\n            res.y /= points.size();\n        }\n\n        return res;\n    }\n\n\n\n\n\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Helpers/HpeHelpers.h",
    "content": "#pragma once\n\n#ifndef HPEHELPERS_H\n#define HPEHELPERS_H\n\n#include <pcl/common/common.h>\n#include <pcl/point_cloud.h>\n#include <pcl/search/kdtree.h>\n#include <pcl/features/normal_3d.h>\n#include <opencv2/opencv.hpp>\n\nnamespace hpe\n{\n    Eigen::Vector3f VectorToEulerAngles(Eigen::Vector3f v);\n\n    Eigen::Vector3f EulerAnglesToVector(double yaw, double tilt, bool inDegrees = true);\n\n    template <typename PointType>\n    Eigen::Vector3f GetNormal(typename pcl::PointCloud<PointType>::Ptr cloud, int noseInCloud)\n    {\n        Eigen::Vector3f result;\n\n        pcl::search::KdTree<PointType>::Ptr search(new pcl::search::KdTree<PointType>);\n\n        pcl::IndicesPtr idx(new std::vector<int>);\n        idx->push_back(noseInCloud);\n\n        pcl::NormalEstimation<PointType, pcl::Normal> ne;\n        ne.setInputCloud(cloud);\n        ne.setIndices(idx);\n        ne.setKSearch(50);\n        ne.setSearchMethod(search);\n\n        pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud <pcl::Normal>);\n        ne.compute(*normals);\n        result = normals->points[0].getNormalVector3fMap();\n\n        return result;\n    }\n\n    cv::Point2f MeanPoint(std::vector<cv::Point2f> points);\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Interface/IInterface.h",
    "content": "#pragma once\n\n#ifndef IINTERFACE_H\n#define IINTERFACE_H\n\nnamespace hpe\n{\n    class IInterface\n    {\n        public:\n            virtual ~IInterface() {}\n    };\n}\n\n#endif // IINTERFACE_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Landmarks.h",
    "content": "#ifndef LANDMARKS_H\n#define LANDMARKS_H\n\n#include <fstream>\n#include <ostream>\n#include <istream>\n#include <string>\n#include <algorithm>\n\n#include <boost/algorithm/string/split.hpp>\n#include <boost/lexical_cast.hpp>\n#include <boost/algorithm/string/classification.hpp>\n#include <boost/assign.hpp>\n#include <Eigen/Eigen>\n\n#include <Common.h>\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n#include <pcl/search/kdtree.h>\n\nnamespace hpe\n{\n    using namespace boost::assign;\n\n    template <typename PointType>\n    void SaveLandmarks(typename Common<PointType>::Landmarks landmarks, std::string filename)\n    {\n        std::ofstream stream(filename);\n\n        std::for_each(landmarks.begin(), landmarks.end(), [&](typename Common<PointType>::Landmarks::value_type l)\n        {\n            stream << l.index << \" \" << l.point.x << \" \" << l.point.y << \" \" << l.point.z << std::endl;\n        });\n    }\n\n    template <typename PointType>\n    typename Common<PointType>::Landmarks LoadLandmarks(std::string filename)\n    {\n        typename Common<PointType>::Landmarks landmarks;\n        std::ifstream istream(filename.c_str());\n\n        std::string line;\n        while (std::getline(istream, line))\n        {\n            std::vector<std::string> values;\n            boost::split(values, line, boost::is_any_of(\" \\t\"), boost::token_compress_on);\n            typename Common<PointType>::Landmark l;\n            if (values[0] != \"n\")\n            {\n                l.index = boost::lexical_cast<int>(values[0]);\n                l.point.x = boost::lexical_cast<float>(values[1]);\n                l.point.y = boost::lexical_cast<float>(values[2]);\n                l.point.z = boost::lexical_cast<float>(values[3]);\n                landmarks.push_back(l);\n            }\n        }\n        return landmarks;\n    }\n\n    template <typename PointType>\n    typename Common<PointType>::Landmarks ResampleLandmarks(typename 
Common<PointType>::Landmarks &oldLandmarks,\n            typename pcl::PointCloud<PointType>::Ptr &cloudToResample)\n    {\n        typename Common<PointType>::Landmarks newLandmarks;\n        for (int i = 0; i < oldLandmarks.size(); i++)\n        {\n            typename Common<PointType>::Landmark l;\n            auto oldLandmark = oldLandmarks.at(i);\n            l.index = oldLandmark.index;\n            l.point = cloudToResample->at(l.index);\n            newLandmarks.push_back(l);\n        }\n\n        return newLandmarks;\n    }\n\n    template<typename PointType>\n    typename Common<PointType>::Landmarks ResampleLandmarksByPosition(typename Common<PointType>::Landmarks &oldLandmarks, typename pcl::PointCloud<PointType>::Ptr &cloudToResample)\n    {\n        pcl::search::KdTree<PointType> tree;\n        tree.setInputCloud(cloudToResample);\n\n        typename Common<PointType>::Landmarks newLandmarks;\n\n        for (int i = 0; i < oldLandmarks.size(); i += 1)\n        {\n            std::vector<int> indices;\n            std::vector<float> distances;\n            tree.nearestKSearch(oldLandmarks[i].point, 1, indices, distances);\n\n            typename Common<PointType>::Landmark newLandmark;\n            newLandmark.index = indices[0];\n            newLandmark.point = cloudToResample->points[newLandmark.index];\n\n            newLandmarks.push_back(newLandmark);\n        }\n\n        return newLandmarks;\n    }\n\n\n    template <typename PointType>\n    PointType MeanLandmark(typename Common<PointType>::Landmarks &landmarks)\n    {\n        PointType res;\n\n        for (int i = 0; i < landmarks.size(); i++)\n        {\n            res.getVector3fMap() += landmarks[i].point.getVector3fMap();\n        }\n\n        res.getVector3fMap() /= landmarks.size();\n\n        return res;\n    }\n\n    template <typename PointType>\n    typename Common<PointType>::Landmarks TransformLandmarks(typename Common<PointType>::Landmarks &oldLandmarks, Eigen::Matrix4f 
transform)\n    {\n        typename pcl::PointCloud<PointType>::Ptr cloud(new pcl::PointCloud<PointType>);\n        for (auto it = oldLandmarks.begin(); it != oldLandmarks.end(); it++)\n        {\n            cloud->push_back(it->point);\n        }\n\n        pcl::transformPointCloud(*cloud, *cloud, transform);\n\n        typename Common<PointType>::Landmarks result;\n        for (int i = 0; i < oldLandmarks.size(); i += 1)\n        {\n            typename Common<PointType>::Landmark l;\n            l.index = oldLandmarks[i].index;\n            l.point = cloud->points[i];\n            result.push_back(l);\n        }\n\n        return result;\n    }\n\n}\n\n#endif // LANDMARKS_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/ConverterProcessor.cpp",
    "content": "#include \"ConverterProcessor.h\"\n#include <Exception/HPEException.h>\n#include <DataObject/RawFrames.h>\n#include <DataObject/CloudXYZRGBA.h>\n\nnamespace hpe\n{\n\n    ConverterProcessor::ConverterProcessor(void)\n        : m_cloudKey(\"Cloud\"), m_rawFramesKey(\"RawFrames\")\n    {\n    }\n\n    ConverterProcessor::ConverterProcessor(ConverterPtr converter)\n        : m_cloudKey(\"Cloud\"), m_rawFramesKey(\"RawFrames\"), m_converter(converter)\n    {\n    }\n\n    ConverterProcessor::~ConverterProcessor(void)\n    {\n    }\n\n    void ConverterProcessor::Init()\n    {\n        if (m_converter.get() == nullptr)\n        {\n            throw HPEException(\"ConverterProcessor::Init - m_converter.get() == nullptr\");\n        }\n    }\n\n    void ConverterProcessor::SetCloudKey(std::string key)\n    {\n        m_cloudKey = key;\n    }\n\n    void ConverterProcessor::SetRawFramesKey(std::string key)\n    {\n        m_rawFramesKey = key;\n    }\n\n    void ConverterProcessor::SetConverter(ConverterPtr converter)\n    {\n        m_converter = converter;\n    }\n\n    void ConverterProcessor::Process(IDataStorage::Ptr dataStorage)\n    {\n        IDataObject::Ptr object = dataStorage->Get(m_rawFramesKey);\n        RawFrames::Ptr rawFrames = std::dynamic_pointer_cast<RawFrames>(object);\n        if (rawFrames.get() == nullptr)\n        {\n            throw HPEException(\"ConverterProcessor::Process - rawFrames.get() == nullptr\");\n        }\n\n        CloudXYZRGBA::Ptr cloudObject(new CloudXYZRGBA);\n        cloudObject->cloud = m_converter->DepthRGBToPointCloud(rawFrames->depthFrame, rawFrames->colorFrame);\n\n        dataStorage->Set(m_cloudKey, cloudObject);\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/ConverterProcessor.h",
    "content": "#pragma once\n\n#ifndef CONVERTERPROCESSOR_H\n#define CONVERTERPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <Converter/IDataConverter.h>\n\nnamespace hpe\n{\n    /**\n     \\class\tConverterProcessor\n    \n     \\brief\tProcessor that converts RGB-D pair into point cloud.\n\t\t\tThe converter should be provided by the client.\n\n\t\t\tIn general converter contains sensor-specific information, \n\t\t\twhile converter processor is generic.\n\n\t\t\tThe Process(...) method takes input data by using m_rawFramesKey,\n\t\t\tand puts the results to m_cloudKey\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class ConverterProcessor : public IProcessor\n    {\n        public:\n            typedef std::shared_ptr<IDataConverter> ConverterPtr;\n\n            ConverterProcessor(void);\n            ConverterProcessor(ConverterPtr converter);\n            ~ConverterProcessor(void);\n\n            void Init();\n            void Process(IDataStorage::Ptr dataDtorage);\n\n            void SetCloudKey(std::string key);\n            void SetRawFramesKey(std::string key);\n            void SetConverter(ConverterPtr converter);\n\n        private:\n            std::string m_cloudKey;\n            std::string m_rawFramesKey;\n            ConverterPtr m_converter;\n\n            cv::Mat AndMask(cv::Mat &matrix, cv::Mat &mat);\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/DepthPreprocessingProcessor.cpp",
    "content": "#include \"DepthPreprocessingProcessor.h\"\n#include <DataObject/RawFrames.h>\n#include <DataObject/CloudXYZRGBA.h>\n#include <Exception/HPEException.h>\n#include <opencv2/opencv.hpp>\nnamespace hpe\n{\n\n    DepthPreprocessingProcessor::DepthPreprocessingProcessor(void)\n        : m_framesKey(\"RawFrames\"), m_erodeSize(9), m_closingSize(20), m_gaussianSize(7), m_gaussianSigma(7), m_distanceThreshold(1.2), m_saveOriginalCloud(false)\n    {\n    }\n\n    DepthPreprocessingProcessor::DepthPreprocessingProcessor(int erodeSize, int closingSize, int gaussianSize, float gaussianSigma, float threshold)\n        : m_framesKey(\"RawFrames\"), m_erodeSize(erodeSize), m_closingSize(closingSize), m_gaussianSize(gaussianSize), m_gaussianSigma(gaussianSigma), m_distanceThreshold(threshold), m_saveOriginalCloud(false)\n    {\n\n    }\n\n    DepthPreprocessingProcessor::~DepthPreprocessingProcessor(void)\n    {\n    }\n\n    void DepthPreprocessingProcessor::Process(IDataStorage::Ptr storage)\n    {\n        RawFrames::Ptr frames = storage->GetAndCastNotNull<RawFrames>(m_framesKey);\n\n        if (m_saveOriginalCloud)\n        {\n            CloudXYZRGBA::Ptr cloudObject(new CloudXYZRGBA);\n            cloudObject->cloud = m_converter->DepthRGBToPointCloud(frames->depthFrame, frames->colorFrame);\n            storage->Set(\"OriginalCloud\", cloudObject);\n        }\n\n        cv::Mat mask = frames->depthFrame < m_distanceThreshold;\n        cv::Mat convolved = AndMask(frames->depthFrame, mask);\n        cv::GaussianBlur(convolved, convolved, cv::Size(m_gaussianSize, m_gaussianSize), m_gaussianSigma, m_gaussianSigma, cv::BORDER_DEFAULT);\n\n        cv::Mat mask2;\n        cv::Mat closingKernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(m_closingSize, m_closingSize));\n        cv::morphologyEx(mask, mask2, cv::MORPH_CLOSE, closingKernel);\n\n        cv::Mat erodeKernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(m_erodeSize, m_erodeSize));\n 
       cv::Mat realMask;\n        cv::morphologyEx(mask2, realMask, cv::MORPH_ERODE, erodeKernel);\n        cv::Mat bodyExtracted = AndMask(convolved, realMask);\n\n        bodyExtracted.copyTo(frames->depthFrame);\n    }\n\n    cv::Mat DepthPreprocessingProcessor::AndMask(cv::Mat &matrix, cv::Mat &mat)\n    {\n        if (matrix.size() != mat.size())\n        {\n            throw HPEException(\"cv::Mat AndMask(cv::Mat &matrix, cv::Mat &mat) - matrix.size() != mat.size()\");\n        }\n\n        cv::Mat result(matrix.rows, matrix.cols, matrix.type());\n\n        cv::MatIterator_<float> target = result.begin<float>();\n        cv::MatIterator_<float> source = matrix.begin<float>();\n\n        cv::MatIterator_<uchar> maskIterator = mat.begin<uchar>();\n\n        for (; target != result.end<float>(); ++target, ++source, ++ maskIterator)\n        {\n            if (*maskIterator != 0)\n            {\n                *target = *source;\n            }\n            else\n            {\n                *target = std::numeric_limits<float>::quiet_NaN();\n            }\n        }\n\n        return result;\n    }\n\n    void DepthPreprocessingProcessor::SetConverter(ConverterPtr c)\n    {\n        m_converter = c;\n        m_saveOriginalCloud = true;\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/DepthPreprocessingProcessor.h",
    "content": "#pragma once\n\n#ifndef DEPTHPREPROCESSINGPROCESSOR_H\n#define DEPTHPREPROCESSINGPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <Converter/IDataConverter.h>\n\n#include <opencv2/opencv.hpp>\n\n\nnamespace hpe\n{\n    /**\n     \\class\tDepthPreprocessingProcessor\n    \n     \\brief\tDepth smoothing, closing holes are done here. \n\t\t\tTo get an idea, have a look at the Process code.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class DepthPreprocessingProcessor : public IProcessor\n    {\n        public:\n            typedef std::shared_ptr<IDataConverter> ConverterPtr;\n\n            DepthPreprocessingProcessor(void);\n            DepthPreprocessingProcessor(int erodeSize, int closingSize, int gaussianSize, float gaussianSigma, float threshold);\n            ~DepthPreprocessingProcessor(void);\n\n            void Process(IDataStorage::Ptr storage) override;\n            void SetConverter(ConverterPtr c);\n\n        private:\n            cv::Mat AndMask(cv::Mat &matrix, cv::Mat &mat);\n\n            std::string m_framesKey;\n\n            int m_gaussianSize;\n            int m_erodeSize;\n            int m_closingSize;\n            float m_gaussianSigma;\n            float m_distanceThreshold;\n\n            bool m_saveOriginalCloud;\n            ConverterPtr m_converter;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/DetectorProcessor.cpp",
    "content": "#include \"DetectorProcessor.h\"\n\n#include <DataObject/RawFrames.h>\n#include <DataObject/HeadPoseInfo.h>\n\n#include <Converter/KinectDataConverter.h>\n\nnamespace hpe\n{\n\n    DetectorProcessor::DetectorProcessor(void)\n        : m_detector(\"DetectorData\")\n    {\n    }\n\n    DetectorProcessor::DetectorProcessor(std::string dataDir)\n        : m_detector(dataDir)\n    {\n    }\n\n    DetectorProcessor::~DetectorProcessor(void)\n    {\n    }\n\n    void DetectorProcessor::Process(hpe::IDataStorage::Ptr dataStorage)\n    {\n        hpe::RawFrames::Ptr frames = dataStorage->GetAndCastNotNull<hpe::RawFrames>(\"RawFrames\", \"DetectorProcessor::Process - can't find raw frames\");\n\n        hpe::headPoseInfo detectorInfo = m_detector.Detect(frames->depthFrame);\n\n        hpe::HeadPoseInformation::Ptr info(new hpe::HeadPoseInformation);\n        if (detectorInfo.x != 0 && detectorInfo.y != 0)\n        {\n            info->DepthFrameX = detectorInfo.x;\n            info->DepthFrameY = detectorInfo.y;\n\n            float depthValue = frames->depthFrame.at<float>(detectorInfo.y, detectorInfo.x);\n            KinectDataConverter converter;\n            pcl::PointXYZ pt = converter.ConvertOnePoint(detectorInfo.x, detectorInfo.y, depthValue, cv::Size(frames->depthFrame.cols, frames->depthFrame.rows));\n\n            info->WorldX = pt.x;\n            info->WorldY = pt.y;\n            info->WorldZ = pt.z;\n\n            if (detectorInfo.tilt != -100 && detectorInfo.yaw != -100)\n            {\n                info->Tilt = detectorInfo.tilt;\n                info->Yaw = detectorInfo.yaw;\n                info->HasHeadInfo = true;\n            }\n        }\n\n\n        dataStorage->Set(\"HeadPoseInfo\", info);\n    }\n\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/DetectorProcessor.h",
    "content": "#pragma once\n\n#ifndef DETECTORPROCESSOR_H\n#define DETECTORPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <Detector/DepthFrameDetector/DepthDetector.h>\n\nnamespace hpe\n{\n\n    class DetectorProcessor : public hpe::IProcessor\n    {\n        public:\n            DetectorProcessor(void);\n            DetectorProcessor(std::string dataDir);\n            ~DetectorProcessor(void);\n\n            void Process(hpe::IDataStorage::Ptr dataStorage);\n\n        private:\n            hpe::DepthDetector m_detector;\n    };\n\n}\n\n#endif\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/FacialExpressionsProcessor.cpp",
    "content": "#include \"FacialExpressionsProcessor.h\"\n\n#include <DataObject/CloudXYZRGBA.h>\n#include <DataObject/LandmarksObject.h>\n\n#include <UI/ShowCloud.h>\n\n#include <Helpers/HpeHelpers.h>\n\n#include <boost/format.hpp>\n\nnamespace hpe\n{\n    void FacialExpressionProcessor::Process(IDataStorage::Ptr dataStorage)\n    {\n        if (m_dataReady)\n        {\n            m_facialExpressionReady(m_ferResult);\n            m_dataReady = false;\n        }\n\n        if (m_workerBusy == false)\n        {\n            CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCast<CloudXYZRGBA>(\"OriginalCloud\");\n            if (cloudObject.get() == nullptr)\n            {\n                cloudObject = dataStorage->GetAndCastNotNull<CloudXYZRGBA>(\"Cloud\", \"FacialExpressionProcessor::Process - cloud is null\");\n            }\n            LandmarksObject<pcl::PointXYZRGBA>::Ptr landmarksObject = dataStorage->GetAndCastNotNull<LandmarksObject<pcl::PointXYZRGBA>>(\"Landmarks\", \"FacialExpressionProcessor::Process - landmarks are null\");\n\n            Common<PointT>::Landmarks l;\n            for (int i = 0; i < landmarksObject->landmarks.size(); i += 1)\n            {\n                PointT pt;\n                pt.getVector3fMap() = landmarksObject->landmarks[i].point.getVector3fMap();\n                l.push_back(Common<PointT>::Landmark(landmarksObject->landmarks[i].index, pt));\n            }\n\n            m_landmarks = l;\n            pcl::copyPointCloud(*(cloudObject->cloud), m_cloud);\n\n            m_workerBusy = true;\n        }\n\n    }\n\n    std::shared_ptr<CylinderSampler> FacialExpressionProcessor::CreateCylindricalSampler()\n    {\n        /*\n        PhiRange=0.05,0.95\n        ZRange=1.3,0.9\n        TopRows=45\n        BottomRows=105\n        SampleColumns=120\n        */\n        CylinderSampler::Range phiRange;\n        phiRange.first = 0.05;\n        phiRange.second = 0.95;\n\n        CylinderSampler::Range zRange;\n        zRange.first 
= 0.9;\n        zRange.second = 1.3;\n\n        int topRows = 45;\n        int bottomRows = 105;\n        int sampleColumns = 120;\n\n        std::vector<int> leftEyeIndices;\n        leftEyeIndices.push_back(0);\n        std::vector<int> rightEyeIndices;\n        rightEyeIndices.push_back(1);\n\n        std::shared_ptr<CylinderSampler> sampler(new CylinderSampler(phiRange, zRange, topRows, bottomRows, sampleColumns));\n        sampler->SetVisualize(false);\n        sampler->SetSupportOptimizedSampling(true);\n        sampler->SetEyeIndices(leftEyeIndices, rightEyeIndices);\n\n        return sampler;\n    }\n\n    FacialExpressionProcessor::FacialExpressionProcessor()\n    {\n        Init(\"FERData\");\n    }\n\n    FacialExpressionProcessor::FacialExpressionProcessor(std::string dataFolder)\n    {\n        Init(dataFolder);\n    }\n\n    //called in constructors\n    void FacialExpressionProcessor::Init(std::string dataFolder)\n    {\n        m_workerBusy = false;\n        m_dataReady = false;\n\n        std::string paramsDir = (boost::format(\"%s/params/\") % dataFolder).str();\n        std::string forestDir = (boost::format(\"%s/trees/\") % dataFolder).str();\n\n        fer::RV_readParamList(paramsDir, &m_ferParameters);\n\n        m_forests = fer::RV_readAllForests(forestDir, m_ferParameters);\n\n        m_numberOfClasses = 3;\n    }\n\n\n    //called before grabber starts to send frames\n    void FacialExpressionProcessor::Init()\n    {\n        std::shared_ptr<hpe::CylinderSampler> sampler = CreateCylindricalSampler();\n        m_featureCalculator = std::shared_ptr<hpe::CylinderOptimizedFeatureCalculator<PointT, NormalT>>(new hpe::CylinderOptimizedFeatureCalculator<PointT, NormalT>(sampler, hpe::FeatureType::Depth));\n        m_featureCalculator->SetFeatureSize(cv::Size(120, 150));\n\n        boost::thread t(boost::bind(&FacialExpressionProcessor::ThreadRoutine, this));\n    }\n\n    void FacialExpressionProcessor::ThreadRoutine()\n    {\n        while 
(true)\n        {\n            while (m_workerBusy == false)\n            {\n                boost::this_thread::sleep(boost::posix_time::milliseconds(10));\n            }\n\n            auto l = m_landmarks;\n\n            pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>);\n\n            Eigen::Vector3f zAxis;\n            zAxis << 0, 0, 1;\n            Eigen::Vector3f normal = l[3].point.getVector3fMap() - l[2].point.getVector3fMap();\n\n            Eigen::Quaternionf rotation;\n            rotation.setFromTwoVectors(normal, zAxis);\n            Eigen::Matrix4f transformRotation = Eigen::Matrix4f::Identity();\n            transformRotation.block<3, 3>(0, 0) = rotation.toRotationMatrix();\n            pcl::transformPointCloud(m_cloud, *cloud, transformRotation);\n\n            l = TransformLandmarks<PointT>(l, transformRotation);\n\n            Eigen::Vector3f xAxis;\n            xAxis << 1, 0, 0;\n            Eigen::Vector3f eyesLine = l[1].point.getVector3fMap() - l[0].point.getVector3fMap();\n\n            rotation.setFromTwoVectors(eyesLine, xAxis);\n            transformRotation = Eigen::Matrix4f::Identity();\n            transformRotation.block<3, 3>(0, 0) = rotation.toRotationMatrix();\n            pcl::transformPointCloud(*cloud, *cloud, transformRotation);\n\n            l = TransformLandmarks<PointT>(l, transformRotation);\n\n            m_featureCalculator->SetLandmarks(l);\n\n            cv::Mat features = m_featureCalculator->GetFeatures(cloud);\n            cv::Mat unrolled = features.reshape(0, 120);\n            unrolled.convertTo(unrolled, CV_64FC1);\n            cv::Mat frame = fer::RV_preprocessDepthFrame(unrolled);\n\n            std::vector<double> bgPerc(m_ferParameters.noPatches);\n            bgPerc.assign(m_ferParameters.noPatches, 0);\n\n            std::vector<cv::Mat> featData;\n            fer::RV_featExtractionSingleFrame(frame, m_ferParameters, &bgPerc, featData);\n\n            std::vector<double> 
weights(featData.size());\n            weights.assign(featData.size(), 1.0 / featData.size());\n            std::vector<double> finalProbs(m_numberOfClasses);\n            finalProbs.assign(m_numberOfClasses, 0);\n\n            for (int i = 0; i < featData.size(); i++)\n            {\n                cv::Mat localData = featData[i];\n                std::vector<fer::randomTree> localForest = m_forests[i];\n                if (bgPerc[i] < 0.5)\n                {\n                    std::vector<double> probs = RV_testForest(localData, localForest, m_ferParameters, m_numberOfClasses);\n                    for (int k = 0; k < m_numberOfClasses; k++)\n                    {\n                        finalProbs[k] += probs[k] * weights[i];\n                    }\n                }\n            }\n\n            m_ferResult = finalProbs;\n            m_dataReady = true;\n            m_workerBusy = false;\n        }\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/FacialExpressionsProcessor.h",
    "content": "#pragma once\n\n#ifndef FACIALEXPRESSIONSPROCESSOR_H\n#define FACIALEXPRESSIONSPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <Features/Sampler/CylinderSampler.h>\n#include <Features/Calculator/CylinderOptimizedFeatureCalculator.h>\n#include <FacialExpressions/ferLocalFunctions.h>\n#include <pcl/point_types.h>\n#include <boost/signals2/signal.hpp>\n\nnamespace hpe\n{\n    class FacialExpressionProcessor: public IProcessor\n    {\n        public:\n            typedef boost::signals2::signal<void (std::vector<double>)> FacialExressionReadySignal;\n\n            FacialExpressionProcessor();\n            FacialExpressionProcessor(std::string dataFolder);\n            void Process(IDataStorage::Ptr dataStorage);\n            void Init();\n            void SubscribeFacialExpressionReadySignal(const FacialExressionReadySignal::slot_type &slot)\n            {\n                m_facialExpressionReady.connect(slot);\n            }\n        private:\n            typedef pcl::PointXYZRGBNormal PointT;\n            typedef pcl::PointXYZRGBNormal NormalT;\n\n            void Init(std::string);\n            std::shared_ptr<CylinderSampler> CreateCylindricalSampler();\n            void ThreadRoutine();\n\n            fer::paramList m_ferParameters;\n            std::vector<std::vector<fer::randomTree>> m_forests;\n            std::shared_ptr<hpe::CylinderOptimizedFeatureCalculator<PointT, NormalT>> m_featureCalculator;\n\n            bool m_workerBusy;\n            bool m_dataReady;\n            std::vector<double> m_ferResult;\n            Common<PointT>::Landmarks m_landmarks;\n            pcl::PointCloud<PointT> m_cloud;\n            int m_numberOfClasses;\n\n            FacialExressionReadySignal m_facialExpressionReady;\n    };\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/FilterProcessor.cpp",
    "content": "\n#include <Processor/FilterProcessor.h>\n#include <DataObject/CloudXYZRGBA.h>\n#include <DataObject/IDataStorage.h>\n\nnamespace hpe\n{\n    FilterProcessor::FilterProcessor(FilterPtr filter, std::string inputKey, std::string outputKey)\n        : m_filter(filter), m_inputKey(inputKey), m_outputKey(outputKey)\n    {\n\n    }\n\n    void FilterProcessor::Process(IDataStorage::Ptr dataStorage)\n    {\n        CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCastNotNull<CloudXYZRGBA>(m_inputKey);\n        CloudXYZRGBA::Ptr filteredCloudObject(new CloudXYZRGBA);\n        filteredCloudObject->cloud = m_filter->Filter(cloudObject->cloud);\n\n        dataStorage->Set(m_outputKey, filteredCloudObject);\n    }\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/FilterProcessor.h",
    "content": "#pragma once\n\n#ifndef FILTERPROCESSOR_H\n#define FILTERPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <Filter/ICloudFilter.h>\n\nnamespace hpe\n{\n    class FilterProcessor : public IProcessor\n    {\n        public:\n            typedef std::shared_ptr<ICloudFilter<pcl::PointXYZRGBA>> FilterPtr;\n\n            FilterProcessor(FilterPtr filter, std::string inputKey, std::string outputKey);\n\n            void Process(IDataStorage::Ptr dataStorage);\n\n        private:\n            FilterPtr m_filter;\n            std::string m_inputKey;\n            std::string m_outputKey;\n    };\n\n}\n\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/HeadExtractionProcessor.cpp",
    "content": "#include \"HeadExtractionProcessor.h\"\n\n#include <UI/PointPicker.h>\n\n#include <DataObject/CloudXYZRGBA.h>\n#include <DataObject/HeadPoseInfo.h>\n\n#include <Filter/BoxFilter.h>\n#include <Filter/Filters.h>\n#include <Common.h>\n\nnamespace hpe\n{\n\n    HeadExtractionProcessor::HeadExtractionProcessor(void)\n        : m_cloudKey(\"Cloud\"), m_filteredCloudKey(\"FilteredCloud\"), m_readyToExtract(false)\n    {\n    }\n\n    HeadExtractionProcessor::~HeadExtractionProcessor(void)\n    {\n    }\n\n    void HeadExtractionProcessor::Process(IDataStorage::Ptr storage)\n    {\n        CloudXYZRGBA::Ptr cloudObject = storage->GetAndCastNotNull<CloudXYZRGBA>(m_cloudKey);\n        HeadPoseInformation::Ptr headPoseInfo = storage->GetAndCast<HeadPoseInformation>(\"HeadPoseInfo\");\n\n        if (m_readyToExtract == false)\n        {\n            if (headPoseInfo.get() != nullptr)\n            {\n                std::shared_ptr<ICloudFilter<TPoint>> boxFilter(new BoxFilter<TPoint>(\n                                                       headPoseInfo->WorldX + 0.075f, headPoseInfo->WorldX - 0.075f,\n                                                       headPoseInfo->WorldY + 0.1f, headPoseInfo->WorldY - 0.106f,\n                                                       -1e6, 1e6));\n\n                m_filteringQueue.AddFilter(boxFilter);\n            }\n            else\n            {\n                Common<pcl::PointXYZRGBA>::Landmarks landmarks;\n                PointPicker<pcl::PointXYZRGBA> picker;\n                picker.SetCloud(cloudObject->cloud);\n                landmarks = picker.Pick(\"Selecte face boundaries\", 4, \"Select the four face boundaries in the following order:\\n\"\n                                        \"\\t1. Leftmost point\\n.\"\n                                        \"\\t2. Rightmost point\\n.\"\n                                        \"\\t3. Topmost point\\n.\"\n                                        \"\\t4. 
Bottommost point (chin)\\n\"\n                                        \"NOTE: Make sure you see the front part of the face. Try rotating the view.\");\n\n                float delta = 0.03f;\n                std::shared_ptr<ICloudFilter<TPoint>> boxFilter(new BoxFilter<TPoint>(\n                                                       landmarks[0].point.x + delta, landmarks[1].point.x - delta,\n                                                       landmarks[2].point.y + delta, landmarks[3].point.y - delta,\n                                                       -1e6, 1e6));\n\n                m_filteringQueue.AddFilter(boxFilter);\n            }\n\n            m_filteringQueue.AddFilterFunctor([](TCloud::Ptr & cloud) -> TCloud::Ptr\n            {\n                return Voxelize<TPoint>(cloud, 0.004f);\n            });\n\n            m_readyToExtract = true;\n        }\n\n        CloudXYZRGBA::Ptr filteredCloudObject(new CloudXYZRGBA);\n        filteredCloudObject->cloud = m_filteringQueue.Filter(cloudObject->cloud);\n\n        storage->Set(m_filteredCloudKey, filteredCloudObject);\n    }\n\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/HeadExtractionProcessor.h",
    "content": "#pragma once\n\n#ifndef HEADEXTRACTIONPROCESSOR_H\n#define HEADEXTRACTIONPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n//#include <Landmarks.h>\n#include <Filter/FilteringQueue.h>\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n\nnamespace hpe\n{\n\n    class HeadExtractionProcessor : public IProcessor\n    {\n        public:\n            HeadExtractionProcessor(void);\n            ~HeadExtractionProcessor(void);\n\n            void Process(IDataStorage::Ptr storage) override;\n\n        private:\n            typedef pcl::PointXYZRGBA TPoint;\n            typedef pcl::PointCloud<TPoint> TCloud;\n\n            std::string m_cloudKey;\n            std::string m_filteredCloudKey;\n\n\n            bool m_readyToExtract;\n            FilteringQueue<TPoint> m_filteringQueue;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/IProcessor.h",
    "content": "#pragma once\n\n#ifndef IPROCESSOR_H\n#define IPROCESSOR_H\n\n#include <Interface/IInterface.h>\n#include <DataObject/IDataStorage.h>\n#include <memory>\n\nnamespace hpe\n{\n    class IProcessor : public IInterface\n    {\n        public:\n            typedef std::shared_ptr<IProcessor> Ptr;\n\n            virtual void Process(IDataStorage::Ptr dataStorage) = 0;\n            virtual void Init() {};\n            virtual void Cleanup() {};\n    };\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/SaveCloudProcessor.cpp",
    "content": "#include \"SaveCloudProcessor.h\"\n\n#include <pcl/io/pcd_io.h>\n#include <pcl/filters/conditional_removal.h>\n#include <boost/format.hpp>\n#include <boost/filesystem.hpp>\n#include <boost/date_time/posix_time/posix_time.hpp>\n\n#include <DataObject/CloudXYZRGBA.h>\n#include <DataObject/RawFrames.h>\n\n#include <opencv/highgui.h>\n\nnamespace hpe\n{\n\n    SaveCloudProcessor::SaveCloudProcessor(void)\n        : m_frameNumber(0), m_dir(\"clouds\"), m_running(true)\n    {\n    }\n\n    SaveCloudProcessor::~SaveCloudProcessor(void)\n    {\n        m_videoWriter->release();\n    }\n\n    void SaveCloudProcessor::Init()\n    {\n        m_writingThread = boost::thread(boost::bind(&SaveCloudProcessor::SavingThreadRoutine, this));\n\n        namespace pt = boost::posix_time;\n        m_dir = pt::to_iso_string(pt::second_clock::local_time());\n        boost::filesystem::create_directory(m_dir);\n\n    }\n\n    void SaveCloudProcessor::ProcessEntry(Entry entry)\n    {\n        if (m_videoWriter.get() == nullptr)\n        {\n            m_videoWriter = std::shared_ptr<cv::VideoWriter>(new cv::VideoWriter);\n            m_videoWriter->open((boost::format(\"%1%/video.mpg\") % m_dir).str(), CV_FOURCC('M', 'P', 'E', 'G'), 30, cv::Size(640, 480));\n        }\n\n        std::vector<cv::Mat> channels;\n        cv::split(entry.frame, channels);\n        channels.erase(channels.begin() + 3);\n        cv::Mat colorWithoutAlpha;\n\n        cv::merge(channels, colorWithoutAlpha);\n        std::vector<int> indices;\n        pcl::removeNaNFromPointCloud(*(entry.cloud), *(entry.cloud), indices);\n\n        std::string savePathCloud = (boost::format(\"%1%/%2%.pcd\") % m_dir % m_frameNumber).str();\n        m_frameNumber += 1;\n\n        m_videoWriter->write(colorWithoutAlpha);\n        pcl::io::savePCDFileBinary(savePathCloud, *(entry.cloud));\n    }\n\n    void SaveCloudProcessor::SavingThreadRoutine()\n    {\n        while (m_running)\n        {\n            Entry entry;\n  
          m_mutex.lock();\n            if (m_entries.size() != 0)\n            {\n                entry = m_entries.front();\n                m_entries.pop();\n                m_mutex.unlock();\n\n                ProcessEntry(entry);\n            }\n            else\n            {\n                m_mutex.unlock();\n                boost::this_thread::sleep(boost::posix_time::milliseconds(100));\n            }\n        }\n\n        m_mutex.lock();\n\n        while (m_entries.size() != 0)\n        {\n            ProcessEntry(m_entries.front());\n            m_entries.pop();\n        }\n        m_videoWriter->release();\n\n        m_mutex.unlock();\n    }\n\n    void SaveCloudProcessor::Cleanup()\n    {\n        m_running = false;\n        m_writingThread.join();\n    }\n\n    void hpe::SaveCloudProcessor::Process(IDataStorage::Ptr dataStorage)\n    {\n        CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCast<CloudXYZRGBA>(\"Cloud\");\n        RawFrames::Ptr frames = dataStorage->GetAndCast<RawFrames>(\"RawFrames\");\n        if (cloudObject.get() == nullptr || frames.get() == nullptr)\n        {\n            return;\n        }\n\n        Entry entry;\n        frames->colorFrame.copyTo(entry.frame);\n        pcl::copyPointCloud(*(cloudObject->cloud), *(entry.cloud));\n\n        m_mutex.lock();\n        m_entries.push(entry);\n        m_mutex.unlock();\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/SaveCloudProcessor.h",
    "content": "#pragma once\n\n#ifndef SAVECLOUDPROCESSOR_H\n#define SAVECLOUDPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <opencv2/opencv.hpp>\n\n#include <pcl/point_cloud.h>\n#include <pcl/point_types.h>\n\n#include <boost/thread/mutex.hpp>\n#include <boost/thread.hpp>\n#include <queue>\n\nnamespace hpe\n{\n\n    class SaveCloudProcessor : public IProcessor\n    {\n        public:\n            SaveCloudProcessor(void);\n            ~SaveCloudProcessor(void);\n\n            void Process(IDataStorage::Ptr dataStorage);\n            void Init();\n            void Cleanup();\n        private:\n            struct Entry\n            {\n                Entry() : cloud(new pcl::PointCloud<pcl::PointXYZRGBA>)\n                {}\n                cv::Mat frame;\n                pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud;\n            };\n\n            int m_frameNumber;\n            std::string m_dir;\n            std::shared_ptr<cv::VideoWriter> m_videoWriter;\n\n            boost::mutex m_mutex;\n            std::queue<Entry> m_entries;\n            boost::thread m_writingThread;\n            bool m_running;\n\n            void SavingThreadRoutine();\n            void ProcessEntry(Entry entry);\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/ShowCloudProcessor.cpp",
    "content": "#include \"ShowCloudProcessor.h\"\n\n#include <DataObject/CloudXYZRGBA.h>\n#include <pcl/common/time.h>\n\nnamespace hpe\n{\n\n    ShowCloudProcessor::ShowCloudProcessor(void)\n        : m_key(\"Cloud\"), m_visualizer(\"Cloud\"), m_first(true)\n    {\n    }\n\n    ShowCloudProcessor::ShowCloudProcessor(std::string key)\n        : m_key(key), m_visualizer(key), m_first(true)\n    {\n    }\n\n    ShowCloudProcessor::~ShowCloudProcessor(void)\n    {\n    }\n\n    void ShowCloudProcessor::Process(IDataStorage::Ptr storage)\n    {\n        IDataObject::Ptr obj = storage->Get(m_key);\n        CloudXYZRGBA::Ptr cloudObject = std::dynamic_pointer_cast<CloudXYZRGBA>(obj);\n        if (cloudObject.get() == nullptr)\n        {\n            return;\n        }\n\n        if (m_first)\n        {\n            m_visualizer.registerKeyboardCallback(&ShowCloudProcessor::KeyboardEventCallback, this);\n            m_visualizer.addPointCloud(cloudObject->cloud, m_key);\n            m_first = false;\n        }\n        else\n        {\n            m_visualizer.updatePointCloud(cloudObject->cloud, m_key);\n        }\n\n        m_visualizer.spinOnce();\n    }\n\n    void ShowCloudProcessor::KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, void *sender)\n    {\n        ShowCloudProcessor *_this = static_cast<ShowCloudProcessor *>(sender);\n        _this->KeyPressed(event);\n    }\n\n    void ShowCloudProcessor::KeyPressed(const pcl::visualization::KeyboardEvent &event)\n    {\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/ShowCloudProcessor.h",
    "content": "#pragma once\n\n#ifndef SHOWCLOUDPROCESSOR_H\n#define SHOWCLOUDPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <pcl/visualization/pcl_visualizer.h>\n#include <pcl/visualization/keyboard_event.h>\n\nnamespace hpe\n{\n\n    class ShowCloudProcessor : public IProcessor\n    {\n        public:\n            ShowCloudProcessor(void);\n            ShowCloudProcessor(std::string key);\n            ~ShowCloudProcessor(void);\n\n            void Process(IDataStorage::Ptr storage);\n\n        protected:\n            virtual void KeyPressed(const pcl::visualization::KeyboardEvent &event);\n\n        private:\n            static void KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, void *sender);\n\n            std::string m_key;\n            pcl::visualization::PCLVisualizer m_visualizer;\n            bool m_first;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/TemplateCreatorProcessor.cpp",
    "content": "#include \"TemplateCreatorProcessor.h\"\n\n#include <DataObject/CloudXYZRGBA.h>\n\nnamespace hpe\n{\n\n    TemplateCreatorProcessor::TemplateCreatorProcessor(void)\n        : m_cloudKey(\"FilteredCloud\"), m_templateKey(\"Template\")\n    {\n    }\n\n    TemplateCreatorProcessor::~TemplateCreatorProcessor(void)\n    {\n    }\n\n    void TemplateCreatorProcessor::Init()\n    {\n\n    }\n\n    void TemplateCreatorProcessor::Process(IDataStorage::Ptr dataStorage)\n    {\n        CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCast<CloudXYZRGBA>(m_cloudKey);\n\n        if (cloudObject.get() != nullptr)\n        {\n            CloudXYZRGBA::Ptr templateObject(new CloudXYZRGBA);\n            templateObject->cloud = m_templateCreator.AddCloudToTemplate(cloudObject->cloud);\n            dataStorage->Set(m_templateKey, templateObject);\n        }\n    }\n\n    IncrementalHeadTemplateCreator::CloudType::Ptr TemplateCreatorProcessor::GetTemplate()\n    {\n        return m_templateCreator.GetTemplate();\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/TemplateCreatorProcessor.h",
    "content": "#pragma once\n\n#ifndef TEMPLATECREATORPROCESSOR_H\n#define TEMPLATECREATORPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <Align/IncrementalHeadTemplateCreator.h>\n\n\nnamespace hpe\n{\n    /**\n     \\class\tTemplateCreatorProcessor\n    \n     \\brief\tA head template is created here. The process uses m_cloudKey as an input,\n\t\t\tand iteratively registers consequtive point clouds by using IncrementalHeadTemplateCreator.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n     */\n\n    class TemplateCreatorProcessor : public IProcessor\n    {\n        public:\n            TemplateCreatorProcessor(void);\n            ~TemplateCreatorProcessor(void);\n\n            void Init() override;\n            void Process(IDataStorage::Ptr dataStorage) override;\n\n            IncrementalHeadTemplateCreator::CloudType::Ptr GetTemplate();\n\n        private:\n            std::string m_cloudKey;\n            std::string m_templateKey;\n\n            IncrementalHeadTemplateCreator m_templateCreator;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/TrackingProcessor.cpp",
    "content": "#include \"TrackingProcessor.h\"\n\n#include <pcl/common/common.h>\n#include <pcl/io/io.h>\n#include <pcl/search/kdtree.h>\n\n#include <boost/algorithm/string.hpp>\n\n#include <UI/PointPicker.h>\n#include <UI/ShowCloud.h>\n#include <Filter/Filters.h>\n#include <Landmarks.h>\n#include <Converter/KinectDataConverter.h>\n\n#include <DataObject/CloudXYZRGBA.h>\n#include <DataObject/LandmarksObject.h>\n#include <DataObject/HeadPoseInfo.h>\n#include <DataObject/RawFrames.h>\n\n#include <Grabber/PCDGrabber.h>\n\n#include <Helpers/HpeHelpers.h>\n#include <boost/filesystem/operations.hpp>\n\nnamespace hpe\n{\n    TrackingProcessor::TrackingProcessor(Cloud::Ptr templateCloud)\n        : m_template(new Cloud), m_first(true), m_leftEyeIdx(-1), m_rightEyeIdx(-1), m_noseIdx(-1), m_saveLandmarks(false)\n    {\n        pcl::copyPointCloud(*templateCloud, *m_template);\n        m_template = Voxelize<PointType>(m_template, 0.0075);\n\n        CloudMapper::DynamicICPParams dynamicIcpParams;\n        dynamicIcpParams.push_back(CloudMapper::ICPParams(0.007, 0.005, 10e-6, 400, 10));\n        m_mapper.SetDynamicICPParams(dynamicIcpParams);\n        m_mapper.SetUsePointToPlaneMetric(true);\n\n        m_tracker = std::shared_ptr<ICPTemplateTracker<PointType>>(new ICPTemplateTracker<PointType>(m_template));\n    }\n\n    TrackingProcessor::~TrackingProcessor(void)\n    {\n    }\n\n    void TrackingProcessor::Process(IDataStorage::Ptr storage)\n    {\n        CloudXYZRGBA::Ptr cloudObject = storage->GetAndCastNotNull<CloudXYZRGBA>(\"Cloud\", \"TrackingProcessor::Process - Cloud is null\");\n        CloudXYZRGBA::Ptr updatedTemplateObject(new CloudXYZRGBA);\n        updatedTemplateObject->cloud = Cloud::Ptr(new Cloud);\n\n        HeadPoseInformation::Ptr info = storage->GetAndCast<HeadPoseInformation>(\"HeadPoseInfo\");\n        PCDGrabber::CloudFileInfo::Ptr fileinfo = storage->GetAndCast<PCDGrabber::CloudFileInfo>(\"FileInfo\");\n\n        if (m_first)\n        {\n           
 if (m_templateLandmarks.size() != 3)\n            {\n                m_templateLandmarks = GetLandmarks(m_template);\n            }\n            m_leftEyeIdx = m_templateLandmarks[1].index;\n            m_rightEyeIdx = m_templateLandmarks[0].index;\n            m_noseIdx = m_templateLandmarks[2].index;\n            if (info.get() == nullptr)\n            {\n                Common<PointType>::Landmarks frameLandmarks;\n                if (fileinfo.get() != nullptr)\n                {\n                    std::string landmarksFile = boost::replace_all_copy(fileinfo->Filename, \".pcd\", \".bnd\");\n                    if (boost::filesystem::exists(landmarksFile))\n                    {\n                        frameLandmarks = LoadLandmarks<pcl::PointXYZRGBA>(landmarksFile);\n                    }\n                }\n                if (frameLandmarks.size() != 3)\n                {\n                    frameLandmarks = GetLandmarks(cloudObject->cloud);\n                    // Only persist the picked landmarks when a file is associated with this frame;\n                    // fileinfo may be null when not replaying from PCD files\n                    if (fileinfo.get() != nullptr)\n                    {\n                        std::string landmarksFile = boost::replace_all_copy(fileinfo->Filename, \".pcd\", \".bnd\");\n                        SaveLandmarks<pcl::PointXYZRGBA>(frameLandmarks, landmarksFile);\n                    }\n                }\n                Eigen::Matrix4f transform = m_mapper.GetTransformHavingLandmarks(m_template, m_templateLandmarks, cloudObject->cloud, frameLandmarks);\n                pcl::transformPointCloud(*m_template, *(updatedTemplateObject->cloud), transform);\n                m_tracker->RoughFirstTimeDetection(cloudObject->cloud, transform);\n            }\n            else\n            {\n                Eigen::Vector3f zVector = GetNormal<PointType>(m_template, m_noseIdx);\n                if (zVector(2) < 0)\n                {\n                    zVector = -zVector;\n                }\n\n                RawFrames::Ptr frames = storage->GetAndCastNotNull<RawFrames>(\"RawFrames\", \"TrackingProcessor::Process - RawFrames is null\");\n\n                KinectDataConverter converter;\n\n                float 
depthValue = 0;\n                int squareSize = 0;\n                while (depthValue == 0)\n                {\n                    cv::Mat square = frames->depthFrame\n                                     .rowRange(info->DepthFrameY - squareSize, info->DepthFrameY + squareSize + 1)\n                                     .colRange(info->DepthFrameX - squareSize, info->DepthFrameX + squareSize + 1);\n                    double min, max;\n                    cv::minMaxIdx(square, &min, &max);\n                    depthValue = max;\n                    squareSize += 1;\n                }\n                pcl::PointXYZ pt = converter.ConvertOnePoint(info->DepthFrameX, info->DepthFrameY, depthValue, cv::Size(frames->depthFrame.cols, frames->depthFrame.rows));\n\n                Eigen::Vector3f orientation = pt.getVector3fMap();\n                orientation /= orientation.norm();\n\n                Eigen::Quaternionf rotation;\n                rotation.setFromTwoVectors(zVector, orientation);\n                Eigen::Matrix4f transformRotation = Eigen::Matrix4f::Identity();\n                transformRotation.block<3, 3>(0, 0) = rotation.toRotationMatrix();\n                pcl::transformPointCloud(*m_template, *(updatedTemplateObject->cloud), transformRotation);\n\n                pcl::PointXYZRGBA nose = updatedTemplateObject->cloud->points[m_templateLandmarks[2].index];\n\n                Eigen::Matrix4f transformTranslation = Eigen::Matrix4f::Identity();\n                transformTranslation.block<3, 1>(0, 3) = pt.getVector3fMap() - nose.getVector3fMap();\n\n                pcl::transformPointCloud(*(updatedTemplateObject->cloud), *(updatedTemplateObject->cloud), transformTranslation);\n\n                Eigen::Matrix4f transform = transformRotation * transformTranslation;\n\n                Eigen::Matrix4f identity = Eigen::Matrix4f::Identity();\n                m_tracker->SetTemplate(updatedTemplateObject->cloud);\n                
m_tracker->RoughFirstTimeDetection(cloudObject->cloud, identity);\n                updatedTemplateObject->cloud = m_tracker->Update(cloudObject->cloud);\n            }\n            m_first = false;\n        }\n        else\n        {\n            updatedTemplateObject->cloud = m_tracker->Update(cloudObject->cloud);\n        }\n\n        Common<PointType>::Landmarks landmarks = ResampleLandmarks<PointType>(m_templateLandmarks, updatedTemplateObject->cloud);\n        //landmarks.push_back(Common<PointType>::Landmark(m_leftEyeIdx, updatedTemplateObject->cloud->points[m_leftEyeIdx]));\n        //landmarks.push_back(Common<PointType>::Landmark(m_rightEyeIdx, updatedTemplateObject->cloud->points[m_rightEyeIdx]));\n        //landmarks.push_back(Common<PointType>::Landmark(m_noseIdx, updatedTemplateObject->cloud->points[m_noseIdx]));\n        PointType pointOnNormal = updatedTemplateObject->cloud->points[m_noseIdx];\n        pointOnNormal.getArray3fMap() = pointOnNormal.getVector3fMap() + GetNormal<PointType>(updatedTemplateObject->cloud, m_noseIdx);\n        landmarks.push_back(Common<PointType>::Landmark(-1, pointOnNormal));\n\n        if (fileinfo.get() != nullptr)\n        {\n            if (m_saveLandmarks)\n            {\n                std::string eyespath = boost::replace_all_copy(fileinfo->Filename, \".pcd\", \".bnd\");\n                std::cout << eyespath << std::endl;\n                SaveLandmarks<PointType>(landmarks, eyespath);\n            }\n        }\n\n        LandmarksObject<PointType>::Ptr landmarksObject(new LandmarksObject<PointType>);\n        landmarksObject->landmarks = landmarks;\n        storage->Set(\"Landmarks\", landmarksObject);\n        storage->Set(\"UpdatedTemplate\", updatedTemplateObject);\n\n        Eigen::Vector3f headPose = VectorToEulerAngles(GetNormal<PointType>(updatedTemplateObject->cloud, m_noseIdx));\n        m_headPoseReadySignal(headPose);\n    }\n\n    Common<TrackingProcessor::PointType>::Landmarks 
TrackingProcessor::GetLandmarks(Cloud::Ptr cloud)\n    {\n        PointPicker<PointType> picker;\n        picker.SetCloud(cloud);\n        return picker.Pick(\"3 points\", 3);\n    }\n\n    void TrackingProcessor::SetTemplateLandmarksFromFile(std::string landmarksFile)\n    {\n        Common<pcl::PointXYZRGBA>::Landmarks l;\n        if (boost::filesystem::exists(landmarksFile))\n        {\n            l = LoadLandmarks<pcl::PointXYZRGBA>(landmarksFile);\n        }\n        else\n        {\n            PointPicker<pcl::PointXYZRGBA> picker;\n            picker.SetCloud(m_template);\n            l = picker.Pick(\"eyes and nose\", 3);\n            SaveLandmarks<pcl::PointXYZRGBA>(l, landmarksFile);\n        }\n        m_templateLandmarks = l;\n    }\n\n\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Processor/TrackingProcessor.h",
    "content": "#pragma once\n\n#ifndef TRACKINGPROCESSOR_H\n#define TRACKINGPROCESSOR_H\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n\n#include <Processor/IProcessor.h>\n#include <Align/CloudMapper.h>\n#include <Tracker/ICPTemplateTracker.h>\n#include <Common.h>\n#include <Landmarks.h>\n\n#include <boost/signals2/signal.hpp>\n\nnamespace hpe\n{\n\n    class TrackingProcessor : public IProcessor\n    {\n        public:\n            typedef pcl::PointXYZRGBA PointType;\n            typedef pcl::PointCloud<PointType> Cloud;\n\n            TrackingProcessor(Cloud::Ptr templateCloud);\n            ~TrackingProcessor(void);\n\n            void Process(IDataStorage::Ptr storage);\n\n            typedef boost::signals2::signal<void (Eigen::Vector3f)> HeadPoseReadySignal;\n            void SubscribeHeadposeReadySignal(const HeadPoseReadySignal::slot_type &slot)\n            {\n                m_headPoseReadySignal.connect(slot);\n            }\n\n            void SetTemplateLandmarks(Common<pcl::PointXYZRGBA>::Landmarks &landmarks)\n            {\n                m_templateLandmarks = ResampleLandmarksByPosition<PointType>(landmarks, m_template);\n            }\n\n            void SetSaveLandmakrs(bool value)\n            {\n                m_saveLandmarks = value;\n            }\n\n            void ResetTracker()\n            {\n                m_tracker->ResetTracking();\n                m_first = true;\n            }\n\n            void SetTemplateLandmarksFromFile(std::string landmarksFile);\n\n        private:\n            Common<PointType>::Landmarks GetLandmarks(Cloud::Ptr cloud);\n\n            int m_leftEyeIdx;\n            int m_rightEyeIdx;\n            int m_noseIdx;\n\n            bool m_first;\n            Cloud::Ptr m_template;\n            CloudMapper m_mapper;\n            std::shared_ptr<ICPTemplateTracker<PointType>> m_tracker;\n\n            Common<pcl::PointXYZRGBA>::Landmarks m_templateLandmarks;\n\n            bool 
m_saveLandmarks;\n\n            HeadPoseReadySignal m_headPoseReadySignal;\n    };\n\n}\n\n#endif\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Provider/IDataProvider.h",
    "content": "#ifndef IDATAPROVIDER_H\n#define IDATAPROVIDER_H\n\n#include <Interface/IInterface.h>\n#include <Common.h>\n\n#include <opencv2/opencv.hpp>\n#include <pcl/point_cloud.h>\n\n\nnamespace hpe\n{\n    /**\n     \\class\tIDataProvider\n    \n     \\brief\tAn interface that provides access to RGB-D pairs, point clouds, and \n\t\t\tlandmarks if available. Can be used to imitate grabbing from a sensor.\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n    \n     \\tparam\tPointType\tType of the point type.\n     */\n\n    template <typename PointType>\n    class IDataProvider : public IInterface\n    {\n        public:\n            virtual int GetFrameCount() = 0;\n            virtual cv::Mat GetDepth(int frameNumber) = 0;\n            virtual cv::Mat GetBgr(int frameNumber) = 0;\n            virtual typename pcl::PointCloud<PointType>::Ptr GetCloud(int frameNumber) = 0;\n            virtual typename Common<PointType>::Landmarks GetPoints(int frameNumber) = 0;\n    };\n}\n\n\n\n#endif // IDATAPROVIDER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Provider/RawDataProvider.cpp",
    "content": "#include \"RawDataProvider.h\"\n\n#include <Provider/CvMatRawSerializer.h>\n#include <Converter/KinectDataConverter.h>\n#include <boost/filesystem.hpp>\n#include <pcl/io/pcd_io.h>\n\nnamespace fs = boost::filesystem;\n\nnamespace hpe\n{\n    RawDataProvider::RawDataProvider()\n        : m_initialized(false)\n    {\n    }\n\n    cv::Mat RawDataProvider::GetDepth(int frameNumber)\n    {\n        CvMatRawSerializer reader;\n        cv::Mat depthImage;\n        reader.Load(m_depthImageFiles[frameNumber], depthImage);\n        return depthImage;\n    }\n\n    cv::Mat RawDataProvider::GetBgr(int frameNumber)\n    {\n        return cv::imread(m_imageFiles[frameNumber]);\n    }\n\n    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr RawDataProvider::GetCloud(int frameNumber)\n    {\n        CvMatRawSerializer reader;\n        cv::Mat depthImage;\n        reader.Load(m_depthImageFiles[frameNumber], depthImage);\n\n        cv::Mat image = cv::imread(m_imageFiles[frameNumber]);\n\n        KinectDataConverter converter;\n        return converter.DepthRGBToPointCloud(depthImage, image);\n    }\n\n    RawDataProvider::Landmarks RawDataProvider::GetPoints(int frameNumber)\n    {\n        RawDataProvider::Landmarks landmarks;\n\n        if (frameNumber < m_pointsFiles.size() || m_pointsFiles.empty())\n        {\n            return landmarks;\n        }\n\n        std::string landmarkFile = m_pointsFiles[frameNumber];\n        std::ifstream stream(landmarkFile.c_str());\n        std::string line;\n        while (std::getline(stream, line))\n        {\n            boost::tokenizer<boost::escaped_list_separator<char>> tokenizer(line);\n            std::vector<std::string> tokens;\n            tokens.assign(tokenizer.begin(), tokenizer.end());\n\n            RawDataProvider::Landmarks::value_type landmark;\n            std::stringstream sstream(tokens[0]);\n            int x, y;\n            sstream >> x;\n            sstream.str(tokens[1]);\n            sstream.clear();\n      
      sstream >> y;\n\n            auto cloud = GetCloud(frameNumber);\n\n            landmark.index = (int)x + cloud->width * (int)y;\n            landmark.point = cloud->at(landmark.index);\n            landmarks.push_back(landmark);\n        }\n\n        return landmarks;\n    }\n\n    int RawDataProvider::GetFrameCount()\n    {\n        int m1 = std::min(m_depthImageFiles.size(), m_imageFiles.size());\n        return m1;\n        return std::min(m1, m_numberOfFrames);\n    }\n\n    int RawDataProvider::GetFrequency()\n    {\n        return m_frequency;\n    }\n\n    void RawDataProvider::CheckInitialized()\n    {\n        if (m_initialized == false)\n        {\n\n        }\n    }\n\n    void RawDataProvider::SetFolder(const std::string &path)\n    {\n        m_pointCloudFolder = path;\n\n        if (fs::exists(m_pointCloudFolder) == false)\n        {\n            throw std::invalid_argument(\"DataProvider::SetFolder: pointCloudFolder does not exist\");\n        }\n\n        std::string videoPath = (fs::path(m_pointCloudFolder) / \"video.mpg\").string();\n\n        if (fs::exists(videoPath) == false)\n        {\n            //throw std::invalid_argument(\"DataProvider::SetFolder: video does not exist\");\n        }\n\n        m_videoCapture = std::shared_ptr<cv::VideoCapture>(new cv::VideoCapture);\n        m_videoCapture->open(videoPath);\n\n        EnumerateFiles();\n\n        m_initialized = true;\n    }\n\n    void RawDataProvider::EnumerateFiles()\n    {\n        fs::path rootFolder(m_pointCloudFolder);\n\n        if (fs::exists(rootFolder))\n        {\n            if (fs::is_directory(rootFolder))\n            {\n                auto it = fs::directory_iterator(rootFolder);\n                for (; it != fs::directory_iterator(); it++)\n                {\n                    auto fileName = *it;\n                    if (fs::is_regular(fileName))\n                    {\n                        auto path = fileName.path();\n                        fs::path 
extension = path.extension();\n                        auto filename = path.filename().string();\n                        if (filename == \"shimmeroutput.csv\")\n                        {\n                            continue;\n                        }\n                        std::string p = path.string();\n                        if (extension == \".png\")\n                        {\n                            m_imageFiles.push_back(p);\n                        }\n                        else if (extension == \".raw\")\n                        {\n                            m_depthImageFiles.push_back(p);\n                        }\n                        else if (extension == \".csv\")\n                        {\n                            m_pointsFiles.push_back(p);\n                        }\n                    }\n                }\n            }\n        }\n\n        std::sort(m_imageFiles.begin(), m_imageFiles.end());\n        std::sort(m_depthImageFiles.begin(), m_depthImageFiles.end());\n        std::sort(m_pointsFiles.begin(), m_pointsFiles.end());\n    }\n}\n\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Provider/RawDataProvider.h",
    "content": "#ifndef RAWDATAPROVIDER_H\n#define RAWDATAPROVIDER_H\n\n#include <vector>\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n#include <opencv2/opencv.hpp>\n#include \"Provider/IDataProvider.h\"\n#include \"Common.h\"\n\nnamespace hpe\n{\n    class RawDataProvider : public IDataProvider<pcl::PointXYZRGBA>\n    {\n        public:\n            typedef Common<pcl::PointXYZRGBA>::Landmarks Landmarks;\n\n            RawDataProvider();\n\n            void SetFolder(const std::string &path);\n\n            cv::Mat GetDepth(int frameNumber);\n            cv::Mat GetBgr(int frameNumber);\n            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr GetCloud(int frameNumber);\n            Landmarks GetPoints(int frameNumber);\n\n            int GetFrameCount();\n            int GetFrequency();\n\n        private:\n            void CheckInitialized();\n            void EnumerateFiles();\n\n            std::vector<std::string> m_imageFiles;\n            std::vector<std::string> m_depthImageFiles;\n            std::vector<std::string> m_pointsFiles;\n\n\n            std::string m_pointCloudFolder;\n\n            int m_numberOfFrames;\n            int m_frequency;\n\n            bool m_initialized;\n\n            std::shared_ptr<cv::VideoCapture> m_videoCapture;\n    };\n}\n\n#endif // DATAPROVIDER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Settings/HPESettings.cpp",
    "content": "#include \"HPESettings.h\"\n\n#include <Exception/HPEException.h>\n\n#include <boost/property_tree/ini_parser.hpp>\n#include <boost/filesystem.hpp>\n#include <boost/format.hpp>\n\nnamespace hpe\n{\n    HPESettings::HPESettings(void)\n    {\n        Load(\"settings.ini\");\n    }\n\n    HPESettings::HPESettings(std::string file)\n    {\n        Load(file);\n    }\n\n    HPESettings::HPESettings(bool load, std::string file)\n    {\n        if (load)\n        {\n            Load(file);\n        }\n    }\n\n    HPESettings::~HPESettings(void)\n    {\n    }\n\n    void HPESettings::Load(std::string file)\n    {\n        if (boost::filesystem::exists(file))\n        {\n            boost::property_tree::ini_parser::read_ini(file, m_propertyTree);\n        }\n        else\n        {\n            throw HPEException(\"Settings: file not found\");\n        }\n    }\n\n    std::string HPESettings::GetString(std::string key)\n    {\n        auto child = m_propertyTree.get_child_optional(key);\n        if (!child)\n        {\n\t\t\tstd::string message = (boost::format(\"Settings: key %1% not found\") % key).str();\n            throw HPEException(message);\n        }\n        return m_propertyTree.get<std::string>(key);\n    }\n\n\n    std::vector<std::string> HPESettings::ReadList(std::string listFile)\n    {\n        if (boost::filesystem::exists(listFile) == false)\n        {\n            throw HPEException(\"std::vector<std::string> HPESettings::ReadList(std::string listFile) : listFile doesn't exist\");\n        }\n\n        std::vector<std::string> result;\n        std::ifstream stream(listFile);\n        std::string line;\n        while (std::getline(stream, line))\n        {\n            result.push_back(line);\n        }\n\n        return result;\n    }\n\n\tstd::vector<std::string> HPESettings::ReadStringList(std::string key)\n\t{\n\t\treturn ReadList(GetString(key));\n\t}\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Settings/HPESettings.h",
    "content": "#pragma once\n\n#ifndef HPESETTINGS_H\n#define HPESETTINGS_H\n\n#include <boost/property_tree/ptree.hpp>\n#include <boost/lexical_cast.hpp>\n#include <boost/algorithm/string.hpp>\n#include <vector>\n#include <iostream>\n\nnamespace hpe\n{\n    /**\n     \\class\tHPESettings\n    \n     \\brief\tThis class is used to read *.ini files.\n     */\n\n    class HPESettings\n    {\n        public:\n            HPESettings(void);\n            HPESettings(std::string file);\n\n            HPESettings(bool load, std::string file = \"settings.ini\");\n            ~HPESettings(void);\n\n            void Load(std::string file);\n\n            std::string GetString(std::string key);\n\n            /**\n             \\fn\ttemplate <typename T> T HPESettings::GetValue(std::string key)\n            \n             \\brief\tReturn a value from ini file and convert it to T.\n            \n             \\author\tSergey\n             \\date\t8/11/2015\n            \n             \\tparam\tT\ttype of the returned value.\n             \\param\tkey\t\tThe key in the ini file.\n            \n             \\return\tThe value.\n             */\n\n            template <typename T>\n            T GetValue(std::string key)\n            {\n                std::string value = GetString(key);\n                return boost::lexical_cast<T>(value);\n            }\n\n            /**\n             \\fn\ttemplate <typename T> std::vector<T> HPESettings::GetVector(std::string key)\n            \n             \\brief\tSimilar to GetValue(...), but to return a vector of T.\t\n\t\t\t\t\tIn an ini an int vector should be stored as 1,2,3,4,5,6 (comma separated)\n            \n             \\tparam\tT\tAll vector values will be cast to T.\n             \\param\tkey\tThe key.\n            \n             \\return\tThe vector of T.\n             */\n\n            template <typename T>\n            std::vector<T> GetVector(std::string key)\n            {\n                std::string value = 
GetString(key);\n                std::vector<std::string> strs;\n                boost::split(strs, value, boost::is_any_of(\",\"));\n                std::vector<T> result;\n                for (int i = 0; i < strs.size(); i++)\n                {\n                    try\n                    {\n                        T v = boost::lexical_cast<T>(strs[i]);\n                        result.push_back(v);\n                    }\n                    catch (boost::bad_lexical_cast &ex)\n                    {\n                        std::cout << ex.what() << std::endl;\n                    }\n                }\n\n                return result;\n            }\n\n            static std::vector<std::string> ReadList(std::string listFile);\n\t\t\tstd::vector<std::string> ReadStringList(std::string key);\n\n        private:\n            boost::property_tree::ptree m_propertyTree;\n    };\n}\n\n#endif\n\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/Tracker/ICPTemplateTracker.h",
    "content": "#ifndef ICPTEMPLATETRACKER_H\n#define ICPTEMPLATETRACKER_H\n\n#include \"Common.h\"\n#include \"Align/CloudMapper.h\"\n#include <DataFilter/IDataFilter.h>\n\n#include <memory.h>\n#include <pcl/point_cloud.h>\n#include <pcl/common/common.h>\n#include <pcl/common/transforms.h>\n#include <pcl/common/io.h>\n#include <pcl/common/time.h>\n#include <pcl/search/kdtree.h>\n#include <pcl/visualization/cloud_viewer.h>\n\nnamespace hpe\n{\n//FIXME Now works only with pcl::PointXYZRGBA because CloudMapper is not a template\n    template <typename TPoint>\n    class ICPTemplateTracker\n    {\n        public:\n            typedef pcl::PointCloud<TPoint> TCloud;\n\n            ICPTemplateTracker(typename TCloud::Ptr &templateCloud)\n                : m_templateCloud(templateCloud), m_canPredict(false), m_transformationFilter(nullptr)\n            {\n                m_mapper = std::shared_ptr<CloudMapper>(new CloudMapper());\n                m_mapper->SetUseNormalShooting(false);\n                m_mapper->SetUsePointToPlaneMetric(true);\n\n                m_icpParams.push_back(CloudMapper::ICPParams(0.005, 0.005, 10e-6, 2000, 5));\n                m_mapper->SetDynamicICPParams(m_icpParams);\n                m_mapper->SetUsePointToPlaneMetric(true);\n                m_followingTemplate = typename TCloud::Ptr(new TCloud);\n            }\n\n            void RoughFirstTimeDetection(typename TCloud::Ptr &frame, Eigen::Matrix4f &transformation)\n            {\n                pcl::transformPointCloud(*m_templateCloud, *m_followingTemplate, transformation);\n                m_canPredict = true;\n            }\n\n            void SetTemplate(typename TCloud::Ptr &templateCloud)\n            {\n                pcl::copyPointCloud(*templateCloud, *m_templateCloud);\n                m_canPredict = false;\n            }\n\n            void GetRelevantPoints(typename TCloud::Ptr &newFrame, std::vector<int> &points, float distance)\n            {\n                
pcl::search::KdTree<TPoint> search;\n                search.setInputCloud(newFrame);\n                for (int i = 0; i < m_followingTemplate->size() ; i += 1)\n                {\n                    static const int neighbours = 1;\n                    std::vector<int> pointIdx(neighbours);\n                    std::vector<float> pointDistance(neighbours);\n                    if (search.nearestKSearch(m_followingTemplate->points[i], neighbours, pointIdx, pointDistance))\n                    {\n                        if (pointDistance[0] < distance)\n                        {\n                            points.push_back(i);\n                        }\n                    }\n                }\n            }\n\n            typename TCloud::Ptr Update(typename TCloud::Ptr &newFrame)\n            {\n                m_followingTransform = m_mapper->GetTransformForTwoCloudsDynamically(m_followingTemplate, newFrame, m_icpParams);\n\n                if (m_transformationFilter != nullptr)\n                {\n                    m_followingTransform = m_transformationFilter->Filter(m_followingTransform);\n                }\n\n                typename TCloud::Ptr updatedCloud(new TCloud);\n                pcl::transformPointCloud(*m_followingTemplate, *updatedCloud, m_followingTransform);\n                pcl::copyPointCloud(*updatedCloud, *m_followingTemplate);\n\n                std::vector<int> pointIndexes;\n                GetRelevantPoints(newFrame, pointIndexes, 5e-6);\n                if (m_pointHistory.size() >= HistorySize)\n                {\n                    m_pointHistory.pop_front();\n                }\n                m_pointHistory.push_back(pointIndexes);\n                std::vector<double> weights = WeightsFromPointHistory(m_pointHistory, m_followingTemplate->size());\n                m_mapper->SetWeights(weights);\n\n                return updatedCloud;\n            }\n\n            Eigen::Matrix4f GetCurrentTransform()\n            {\n               
 return m_followingTransform;\n            }\n\n            void SetTransformationFilter(std::shared_ptr<DataFilter<Eigen::Matrix4f>> &dataFilter)\n            {\n                m_transformationFilter = dataFilter;\n            }\n\n            bool CanPredict()\n            {\n                return m_canPredict;\n            }\n\n            void ResetTracking()\n            {\n                m_canPredict = false;\n            }\n\n        private:\n            std::vector<double> WeightsUniform(int totalPoints, double weight = 1)\n            {\n                // reserve() does not create elements, so indexing a reserved\n                // vector is undefined; construct the vector directly instead\n                return std::vector<double>(totalPoints, weight);\n            }\n\n            std::vector<double> WeightsFromPointHistory(PointHistory &history, int totalPoints)\n            {\n                std::vector<double> weights(totalPoints);\n                for (auto indexes = history.begin(); indexes != history.end(); indexes++)\n                {\n                    for (auto index = indexes->begin(); index != indexes->end(); index++)\n                    {\n                        weights[*index] += 1;\n                    }\n                }\n                for (int i = 0; i < weights.size(); i += 1)\n                {\n                    weights[i] = 0.3 + 0.4 * weights[i];\n                }\n                return weights;\n            }\n\n            std::vector<double> WeightsGaussian(typename TCloud::Ptr &cloud)\n            {\n                TPoint &noseTip = cloud->points[0];\n\n                std::vector<double> weights(cloud->points.size());\n                const double sigma = 0.1;\n                for (int i = 0; i < cloud->points.size(); i += 1)\n                {\n                    TPoint &pt = cloud->points[i];\n                    double dist = (pt.x - noseTip.x) * (pt.x - noseTip.x) 
+ (pt.y - noseTip.y) * (pt.y - noseTip.y) + (pt.z - noseTip.z) * (pt.z - noseTip.z);\n                    weights[i] = 0.3 + std::exp(- dist / 2 / sigma / sigma);\n                }\n                return weights;\n            }\n\n            std::vector<double> WeightsGaussianAndHistory(PointHistory &history, typename TCloud::Ptr &cloud)\n            {\n                std::vector<double> weightsFromHistory = WeightsFromPointHistory(history, cloud->size());\n                std::vector<double> weightsFromGaussian = WeightsGaussian(cloud);\n                std::vector<double> weights(cloud->size());\n                for (int i = 0; i < cloud->size(); i += 1)\n                {\n                    weights[i] = weightsFromGaussian[i] + weightsFromHistory[i];\n                }\n                return weights;\n            }\n\n            bool m_canPredict;\n            static const int HistorySize = 5;\n\n            typename TCloud::Ptr m_templateCloud;\n            typename TCloud::Ptr m_followingTemplate;\n\n            std::shared_ptr<CloudMapper> m_mapper;\n            PointHistory m_pointHistory;\n\n            Eigen::Matrix4f m_followingTransform;\n\n            CloudMapper::DynamicICPParams m_icpParams;\n\n            std::shared_ptr<DataFilter<Eigen::Matrix4f>> m_transformationFilter;\n\n    };\n}\n\n#endif // ICPTEMPLATETRACKER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/UI/PointPicker.h",
    "content": "#pragma once\n\n#ifndef POINTPICKER_H\n#define POINTPICKER_H\n\n#include <vector>\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n\n#include <pcl/visualization/pcl_visualizer.h>\n#include <pcl/visualization/point_picking_event.h>\n#include <pcl/visualization/keyboard_event.h>\n\n#include <pcl/search/kdtree.h>\n\n#include <boost/format.hpp>\n\n#include <Exception/HPEException.h>\n\n#include <Common.h>\n\nnamespace hpe\n{\n    /**\n     \\class\tPointPicker\n    \n     \\brief\tA component that shows a window asking to manually pick a set of points.\n\t\t\t\n\t\t\tTo pick a point press SHIFT key and left click on the point, the point index will be \n\t\t\tdisplayed.\n\t\t\t\n\t\t\tTo delete the last point press DELETE.\n\n\t\t\tWhen done press Space\n    \n     \\author\tSergey\n     \\date\t8/11/2015\n    \n     \\tparam\tPointType\tType of the point type.\n     */\n\n    template <typename PointType>\n    class PointPicker\n    {\n        public:\n            typedef typename Common<PointType>::Landmark PickedPoint;\n            typedef typename Common<PointType>::Landmarks PickedPoints;\n\n            PointPicker(bool showColor = true, bool allowLessKeypoints = false)\n                : m_cloudIsSet(false), m_done(false), m_sphereNumber(0),\n                  m_warningId(\"warning text\"), m_pointsListId(\"m_pointsListId\"),\n                  m_selectedPointsCloudId(\"m_selectedPointsCloudId\"),\n                  m_addedPointList(false), m_addedCloud(false),\n                  m_showColor(showColor), m_allowLessKeypoints(allowLessKeypoints)\n            {\n\n            }\n\n            PickedPoints Pick(std::string message, int numberOfPoints, std::string extendedMessage=\"\")\n            {\n                if (m_cloudIsSet == false)\n                {\n                    throw HPEException(\"void Pick(std::string &message, int numberOfPoints): Cloud was not set\");\n                }\n\n                m_numberOfPointsToPick = 
numberOfPoints;\n\n                pcl::visualization::PCLVisualizer viewer(message);\n                m_visualizer = &viewer;\n\n                viewer.registerPointPickingCallback(&PointPicker::PointPickingCallback, this);\n                viewer.registerKeyboardCallback(&PointPicker::KeyboardCallback, this);\n\n                viewer.addPointCloud(m_cloud, \"cloud\");\n\n                if (m_showColor == false)\n                {\n                    viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 1, \"cloud\");\n                }\n\n                viewer.addText(\"Press:\\n SHIFT + Left Click to select\\n BACKSPACE to delete last point\\n SPACE to finish\", 10, 50, 11, 1, 1, 1, \"instructions\");\n                viewer.addText(extendedMessage, 200, 50, 11, 1, 1, 1, \"extendedMessage\");\n\n                while (viewer.wasStopped() == false && m_done == false)\n                {\n                    viewer.spinOnce(100);\n                    boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n                }\n\n                std::vector<pcl::visualization::Camera> cameras;\n                viewer.getCameras(cameras);\n                if (cameras.size() > 0)\n                {\n                    m_camera = std::shared_ptr<pcl::visualization::Camera>(new pcl::visualization::Camera);\n                    *m_camera = cameras[0];\n                }\n\n                m_visualizer->close();\n\n                return m_pickedPoints;\n            }\n\n            void SetCloud(const typename pcl::PointCloud<PointType>::Ptr &cloud)\n            {\n                m_cloud = cloud;\n                m_search.setInputCloud(cloud);\n                m_cloudIsSet = true;\n                m_distanceBetweenPoints = std::abs(cloud->at(0).y - cloud->at(1).y);\n                ClearPicker();\n            }\n\n            std::shared_ptr<pcl::visualization::Camera> GetCamera() {\n                
return m_camera;\n            }\n\n        private:\n            std::shared_ptr<pcl::visualization::Camera> m_camera;\n\n            bool m_showColor;\n            bool m_cloudIsSet;\n            bool m_done;\n            bool m_addedCloud;\n            bool m_addedPointList;\n            bool m_allowLessKeypoints;\n            int m_numberOfPointsToPick;\n            float m_distanceBetweenPoints;\n\n            std::string m_warningId;\n            std::string m_pointsListId;\n            std::string m_selectedPointsCloudId;\n\n            pcl::visualization::PCLVisualizer *m_visualizer;\n\n            typename pcl::PointCloud<PointType>::Ptr m_cloud;\n            typename pcl::search::KdTree<PointType> m_search;\n\n            PickedPoints m_pickedPoints;\n\n            int m_sphereNumber;\n\n            void ShowSelectedPoints()\n            {\n                if (m_addedCloud)\n                {\n                    m_visualizer->removePointCloud(m_selectedPointsCloudId);\n                    m_addedCloud = false;\n                }\n\n                if (m_pickedPoints.empty())\n                {\n                    return;\n                }\n\n                pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);\n                for (int i = 0; i < m_pickedPoints.size(); i++)\n                {\n                    auto point = m_pickedPoints[i].point;\n                    pcl::PointXYZRGB p;\n                    p.x = point.x;\n                    p.z = point.z;\n                    p.y = point.y;\n                    p.r = 255;\n                    p.g = 255;\n                    p.b = 255;\n                    cloud->points.push_back(p);\n                }\n\n                m_visualizer->addPointCloud(cloud, m_selectedPointsCloudId);\n                m_visualizer->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 4, m_selectedPointsCloudId);\n                m_addedCloud = true;\n            
}\n\n            void RemoveLastPoint()\n            {\n                if (m_pickedPoints.empty())\n                {\n                    return;\n                }\n\n                m_pickedPoints.pop_back();\n            }\n\n            void PrintPointsList()\n            {\n                if (m_addedPointList)\n                {\n                    m_visualizer->removeShape(m_pointsListId);\n                    m_addedPointList = false;\n                }\n\n                if (m_pickedPoints.empty())\n                {\n                    return;\n                }\n\n                std::stringstream stream;\n                for (int i = 0; i < m_pickedPoints.size(); i++)\n                {\n                    auto p = m_pickedPoints[i].point;\n                    stream << boost::format(\"%1% - %5% - [%2%  %3%  %4%]\") % (i + 1) % p.x % p.y % p.z % m_pickedPoints[i].index << std::endl;\n                }\n\n                m_addedPointList = true;\n                auto text = stream.str();\n                m_visualizer->addText(text, 10, 150, 10, 1, 1, 1, m_pointsListId);\n            }\n\n            void ClearPicker()\n            {\n                m_pickedPoints.clear();\n                m_done = false;\n            }\n\n            void ShowWarning()\n            {\n                std::string warning = (boost::format(\"You need to pick exactly %1% points\") % m_numberOfPointsToPick).str();\n                m_visualizer->addText(warning, 10, 100, 10, 1, 0, 0, m_warningId);\n            }\n\n            bool CheckNumberOfPoints()\n            {\n                if (m_pickedPoints.size() >= m_numberOfPointsToPick)\n                {\n                    ShowWarning();\n                    return false;\n                }\n                return true;\n            }\n\n            bool CheckEnoughPointsSelected()\n            {\n                if (m_allowLessKeypoints)\n                {\n                    return true;\n                }\n        
        if (m_pickedPoints.size() == m_numberOfPointsToPick)\n                {\n                    return true;\n                }\n                ShowWarning();\n                return false;\n            }\n\n            void ClearWarning()\n            {\n                m_visualizer->removeShape(m_warningId);\n            }\n\n            static void PointPickingCallback(const pcl::visualization::PointPickingEvent &event, void *sender)\n            {\n                PointPicker *_this = (PointPicker *)sender;\n\n                if (_this->CheckNumberOfPoints() == false)\n                {\n                    return;\n                }\n\n                float x, y, z;\n                event.getPoint(x, y, z);\n\n                PointType p;\n                p.x = x;\n                p.y = y;\n                p.z = z;\n\n                std::vector<int> foundIndices;\n                std::vector<float> distances;\n                _this->m_search.nearestKSearch(p, 1, foundIndices, distances);\n\n                PickedPoint pickedPoint;\n                pickedPoint.index = foundIndices.at(0);\n                pickedPoint.point = _this->m_cloud->at(pickedPoint.index);\n                _this->m_pickedPoints.push_back(pickedPoint);\n\n                _this->ShowSelectedPoints();\n                _this->PrintPointsList();\n            }\n\n            static void KeyboardCallback(const pcl::visualization::KeyboardEvent &event, void *sender)\n            {\n                if (event.keyDown() == false)\n                {\n                    return;\n                }\n\n                PointPicker *_this = (PointPicker *)sender;\n\n                _this->ClearWarning();\n\n                auto keysym = event.getKeySym();\n\n                if (keysym == \"space\")\n                {\n                    if (_this->CheckEnoughPointsSelected())\n                    {\n                        _this->m_done = true;\n                    }\n                }\n          
      if (keysym == \"Delete\")\n                {\n                    //                _this->ClearPicker();\n\n                    //                _this->PrintPointsList();\n                    //                _this->ShowSelectedPoints();\n                }\n                if (keysym == \"BackSpace\")\n                {\n                    _this->RemoveLastPoint();\n\n                    _this->PrintPointsList();\n                    _this->ShowSelectedPoints();\n                }\n            }\n\n\n    };\n\n}\n\n#endif // POINTPICKER_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/UI/ShowCloud.h",
    "content": "#ifndef SHOWCLOUD_H\n#define SHOWCLOUD_H\n\n#include <pcl/point_types.h>\n#include <pcl/point_cloud.h>\n#include <pcl/common/common.h>\n#include <pcl/common/centroid.h>\n#include <pcl/common/transforms.h>\n\n#include <pcl/visualization/pcl_visualizer.h>\n\n#include <boost/format.hpp>\n\n#include <string>\n#include <vector>\n\n//#include <Provider/DataProvider.h>\n#include <Common.h>\n\nnamespace hpe\n{\n    template<typename PointType>\n    typename pcl::PointCloud<PointType>::Ptr MoveCloudTo(typename pcl::PointCloud<PointType>::Ptr &cloud, float x, float y, float z)\n    {\n        Eigen::Vector4d centroid;\n        pcl::compute3DCentroid(*cloud, centroid);\n\n        typename pcl::PointCloud<PointType>::Ptr result(new pcl::PointCloud<PointType>);\n        Eigen::Matrix4f translation = Eigen::Matrix4f::Identity();\n        translation(0, 3) = (x - centroid(0));\n        translation(1, 3) = (y - centroid(1));\n        translation(2, 3) = (z - centroid(2));\n\n        pcl::demeanPointCloud(*cloud, centroid, *result);\n\n        return result;\n    }\n\n\n    template<typename PointType>\n    void ShowTwoClouds(typename pcl::PointCloud<PointType>::Ptr &cloud1,\n                       typename pcl::PointCloud<PointType>::Ptr &cloud2,\n                       std::string caption = \"Two Clouds\",\n                       int waitMillisecods = 0)\n    {\n        pcl::visualization::PCLVisualizer viewer(caption);\n        viewer.addPointCloud(cloud1, \"1\");\n        viewer.addPointCloud(cloud2, \"2\");\n\n        if (waitMillisecods == 0)\n        {\n            while (viewer.wasStopped() == false)\n            {\n                viewer.spinOnce(100);\n                boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n            }\n        }\n        else\n        {\n            boost::this_thread::sleep(boost::posix_time::milliseconds(waitMillisecods));\n        }\n    }\n\n    template<typename PointType>\n    
std::shared_ptr<pcl::visualization::Camera> ShowCloud(typename pcl::PointCloud<PointType>::Ptr &cloud,\n            std::string caption = \"Cloud\",\n            std::shared_ptr<pcl::visualization::Camera> camera = nullptr,\n            double r = 0, double g = 0, double b = 0)\n    {\n        pcl::visualization::PCLVisualizer viewer(caption);\n\n        auto t = MoveCloudTo<PointType>(cloud, 0, 0, 0);\n        viewer.addPointCloud(t, \"1\");\n\n        if (camera != nullptr)\n        {\n            viewer.setCameraParameters(*camera);\n        }\n\n        viewer.setBackgroundColor(r, g, b);\n        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, \"1\");\n\n        while (viewer.wasStopped() == false)\n        {\n            viewer.spinOnce(100);\n            boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n        }\n\n        std::vector<pcl::visualization::Camera> cameras;\n        viewer.getCameras(cameras);\n\n        std::shared_ptr<pcl::visualization::Camera> result(new pcl::visualization::Camera);\n        *result = cameras[0];\n\n        return result;\n    }\n\n    template<typename PointType>\n    void ShowAllClouds(std::vector<typename pcl::PointCloud<PointType>::Ptr> &clouds, std::string caption = \"Clouds\")\n    {\n        pcl::visualization::PCLVisualizer viewer(caption);\n\n        auto it = clouds.begin();\n        int i = 0;\n        for (; it != clouds.end(); ++it)\n        {\n            std::string cloudName = (boost::format(\"%1%\") % i++).str();\n            viewer.addPointCloud(*it, cloudName);\n        }\n\n        while (viewer.wasStopped() == false)\n        {\n            viewer.spinOnce(100);\n            boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n        }\n    }\n\n    template<typename PointType>\n    void ShowTwoCloudsInDifferentColors(typename pcl::PointCloud<PointType>::Ptr &cloud1,\n                                        typename 
pcl::PointCloud<PointType>::Ptr &cloud2,\n                                        pcl::CorrespondencesPtr correspondences,\n                                        std::string caption = \"Two clouds different color\",\n                                        int waitMillisecods = 0)\n    {\n        pcl::visualization::PCLVisualizer viewer(caption);\n        viewer.addCoordinateSystem();\n        viewer.addPointCloud<PointType>(cloud1, \"1\");\n        viewer.addPointCloud<PointType>(cloud2, \"2\");\n\n        if (correspondences != nullptr)\n        {\n            viewer.addCorrespondences<PointType>(cloud1, cloud2, *correspondences);\n        }\n\n\n\n        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, \"1\");\n        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, \"2\");\n\n        if (waitMillisecods == 0)\n        {\n            while (viewer.wasStopped() == false)\n            {\n                viewer.spinOnce(100);\n                boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n            }\n        }\n        else\n        {\n            viewer.spinOnce(waitMillisecods);\n        }\n\n    }\n\n    template<typename PointType>\n    void ShowCloudInColor(typename pcl::PointCloud<PointType>::Ptr &cloud,\n                          std::shared_ptr<pcl::visualization::Camera> camera = nullptr,\n                          double r = 0, double g = 0, double b = 1,\n                          double rb = 0, double gb = 0, double bb = 0, double pointSize = 3,\n                          std::string caption = \"Cloud in color\")\n    {\n        pcl::visualization::PCLVisualizer viewer(caption);\n        viewer.addPointCloud(cloud, \"1\");\n\n        if (camera != nullptr)\n        {\n            viewer.setCameraParameters(*camera);\n        }\n\n        viewer.setBackgroundColor(rb, gb, bb);\n\n        
viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, r, g, b, \"1\");\n        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, pointSize, \"1\");\n\n\n        while (viewer.wasStopped() == false)\n        {\n            viewer.spinOnce(100);\n            boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n        }\n    }\n\n    template<typename PointType>\n    void ShowTwoCloudsInDifferentColors(typename pcl::PointCloud<PointType>::Ptr &cloud1,\n                                        typename pcl::PointCloud<PointType>::Ptr &cloud2,\n                                        std::string caption = \"Two clouds different color\",\n                                        int waitMillisecods = 0)\n    {\n        ShowTwoCloudsInDifferentColors<PointType>(cloud1, cloud2, pcl::CorrespondencesPtr(), caption, waitMillisecods);\n    }\n\n    template<typename PointType>\n    void ShowCloudWithLandmarks(typename pcl::PointCloud<PointType>::Ptr &cloud,\n                                typename Common<PointType>::Landmarks &landmarks,\n                                std::string caption = \"Landmarks\", bool showLandmarksNumbers = false)\n    {\n        typename pcl::PointCloud<PointType>::Ptr markerCloud(new pcl::PointCloud<PointType>);\n\n        for (auto it = landmarks.begin(); it != landmarks.end(); ++it)\n        {\n            markerCloud->push_back((*it).point);\n        }\n\n        pcl::visualization::PCLVisualizer viewer(caption);\n        viewer.addPointCloud<PointType>(cloud, \"Main Cloud\");\n        viewer.addPointCloud<PointType>(markerCloud, \"Marker Cloud\");\n        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, \"Marker Cloud\");\n        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 4, \"Marker Cloud\");\n\n        if (showLandmarksNumbers)\n        {\n            for (int i = 0; i < 
markerCloud->size(); i++)\n            {\n                std::string text = (boost::format(\"%1%\") % i).str();\n                viewer.addText3D(text, markerCloud->at(i), 2, 1, 1, 1, text);\n            }\n        }\n\n        while (viewer.wasStopped() == false)\n        {\n            viewer.spinOnce(100);\n            boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n        }\n    }\n}\n\n#endif // SHOWCLOUD_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/UI/ShowMesh.h",
    "content": "#pragma once\n\n#ifndef SHOWMESH_H\n#define SHOWMESH_H\n\n#include <pcl/visualization/pcl_visualizer.h>\n#include <pcl/PolygonMesh.h>\n\nnamespace hpe\n{\n    void ShowMesh(pcl::PolygonMesh::Ptr &mesh, std::string caption = \"Mesh\")\n    {\n        pcl::visualization::PCLVisualizer viewer(caption);\n\n        viewer.addPolygonMesh(*mesh, \"mesh\");\n        //viewer.resetCamera();\n        //viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, \"mesh\");\n        while (viewer.wasStopped() == false)\n        {\n            viewer.spinOnce(100);\n            boost::this_thread::sleep(boost::posix_time::microseconds(100000));\n        }\n    }\n}\n\n#endif // SHOWMESH_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/UI/VisualizerCommon.h",
    "content": "#ifndef VISUALIZERCOMMON_H\n#define VISUALIZERCOMMON_H\n\n#include <vtkPolyDataMapper.h>\n#include <vtkActor.h>\n#include <vtkSmartPointer.h>\n#include <vtkRenderWindow.h>\n#include <vtkRenderer.h>\n#include <vtkRenderWindowInteractor.h>\n#include <vtkPolyData.h>\n#include <vtkSphereSource.h>\n#include <vtkWindowToImageFilter.h>\n#include <vtkPNGWriter.h>\n\nnamespace hpe\n{\n    void SaveVisualizationScreenshot(std::string filename, vtkSmartPointer<vtkRenderWindow> &renderWindow)\n    {\n        renderWindow->Render();\n        vtkSmartPointer<vtkWindowToImageFilter> windowToImageFilter =\n            vtkSmartPointer<vtkWindowToImageFilter>::New();\n        windowToImageFilter->SetInput(renderWindow);\n        //windowToImageFilter->SetMagnification(1); //set the resolution of the output image (3 times the current resolution of vtk render window)\n        windowToImageFilter->SetInputBufferTypeToRGBA(); //also record the alpha (transparency) channel\n        windowToImageFilter->Update();\n\n        vtkSmartPointer<vtkPNGWriter> writer =\n            vtkSmartPointer<vtkPNGWriter>::New();\n        writer->SetFileName(filename.c_str());\n        writer->SetInputConnection(windowToImageFilter->GetOutputPort());\n        writer->Write();\n    }\n}\n#endif // VISUALIZERCOMMON_H\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Grabber/KinectSDKGrabber.cpp",
    "content": "#include \"KinectSDKGrabber.h\"\n\n#include <DataObject/MapDataStorage.h>\n#include <Exception/HPEException.h>\n\n#include <pcl/common/time.h>\n\nnamespace hpe\n{\n\n    KinectSDKGrabber::KinectSDKGrabber(void)\n    {\n        m_colorCoordinates = new LONG[FrameBufferSize * 2];\n    }\n\n    KinectSDKGrabber::~KinectSDKGrabber(void)\n    {\n        delete [] m_colorCoordinates;\n    }\n\n    IDataStorage::Ptr KinectSDKGrabber::CreateDataStorage()\n    {\n        MapDataStorage::Ptr storage(new MapDataStorage);\n        IDataObject::Ptr mapper(new KinectCoordinatesMapper(m_colorCoordinates));\n        storage->Set(\"CoordinatesMapper\", mapper);\n        return storage;\n    }\n\n    void KinectSDKGrabber::PreRun()\n    {\n        HRESULT hr = CreateFirstConnected();\n        if (FAILED(hr))\n        {\n            throw HPEException(\"KinectSDKGrabber::PreRun() - CreateFirstConnected() failed\");\n        }\n    }\n\n    bool KinectSDKGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame)\n    {\n        WaitForSingleObject(m_hNextDepthFrameEvent, INFINITE);\n        WaitForSingleObject(m_hNextColorFrameEvent, INFINITE);\n\n        colorFrame = cv::Mat::zeros(FrameHeight, FrameWidth, CV_8UC4);\n        depthFrame = cv::Mat::zeros(FrameHeight, FrameWidth, CV_32FC1);\n\n        HRESULT hr = ProcessDepth((float *)depthFrame.data);\n        if (FAILED(hr))\n        {\n            return false;\n        }\n\n        hr = ProcessColor(colorFrame);\n        if (FAILED(hr))\n        {\n            return false;\n        }\n\n        return true;\n    }\n\n    HRESULT KinectSDKGrabber::CreateFirstConnected()\n    {\n        INuiSensor *pNuiSensor;\n        HRESULT hr;\n\n        int iSensorCount = 0;\n        hr = NuiGetSensorCount(&iSensorCount);\n        if (FAILED(hr))\n        {\n            return hr;\n        }\n        std::cout << \"Count\" << std::endl;\n\n        // Look at each Kinect sensor\n        for (int i = 0; i < iSensorCount; 
++i)\n        {\n            // Create the sensor so we can check status, if we can't create it, move on to the next\n            hr = NuiCreateSensorByIndex(i, &pNuiSensor);\n            if (FAILED(hr))\n            {\n                continue;\n            }\n            std::cout << \"Created\" << std::endl;\n\n            // Get the status of the sensor, and if connected, then we can initialize it\n            hr = pNuiSensor->NuiStatus();\n            if (S_OK == hr)\n            {\n                m_pNuiSensor = pNuiSensor;\n                break;\n            }\n\n            // This sensor wasn't OK, so release it since we're not using it\n            pNuiSensor->Release();\n        }\n\n        if (NULL != m_pNuiSensor)\n        {\n            // Initialize the Kinect and specify that we'll be using depth\n            hr = m_pNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);\n            if (SUCCEEDED(hr))\n            {\n                // Create an event that will be signaled when depth data is available\n                m_hNextDepthFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);\n\n                // Open a depth image stream to receive depth frames\n                hr = m_pNuiSensor->NuiImageStreamOpen(\n                         NUI_IMAGE_TYPE_DEPTH,\n                         NUI_IMAGE_RESOLUTION_640x480,\n                         0,\n                         2,\n                         m_hNextDepthFrameEvent,\n                         &m_pDepthStreamHandle);\n\n                m_pNuiSensor->NuiImageStreamSetImageFrameFlags(m_pDepthStreamHandle, NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE);\n\n                std::cout << \"Depth\" << std::endl;\n\n            }\n\n            if (SUCCEEDED(hr))\n            {\n                m_hNextColorFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);\n\n                hr = m_pNuiSensor->NuiImageStreamOpen(\n                         NUI_IMAGE_TYPE_COLOR,\n                    
     NUI_IMAGE_RESOLUTION_640x480,\n                         0,\n                         2,\n                         m_hNextColorFrameEvent,\n                         &m_pColorStreamHandle);\n                std::cout << \"Color\" << std::endl;\n            }\n        }\n\n        if (NULL == m_pNuiSensor || FAILED(hr))\n        {\n            return E_FAIL;\n        }\n\n        return hr;\n    }\n\n    HRESULT KinectSDKGrabber::ProcessDepth(float *buffer)\n    {\n        HRESULT hr = S_OK;\n        NUI_IMAGE_FRAME imageFrame;\n\n        // Attempt to get the depth frame. On any failure imageFrame is not valid, so bail out;\n        // E_NUI_FRAME_NO_DATA simply means no new frame is ready yet and is not worth logging.\n        hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);\n        if (FAILED(hr))\n        {\n            if (hr != E_NUI_FRAME_NO_DATA)\n            {\n                std::cout << hr << std::endl;\n            }\n            return hr;\n        }\n\n        BOOL nearMode;\n        INuiFrameTexture *pTexture = imageFrame.pFrameTexture;\n        NUI_LOCKED_RECT LockedRect;\n\n        // Lock the frame data so the Kinect knows not to modify it while we're reading it\n        pTexture->LockRect(0, &LockedRect, NULL, 0);\n\n        // Map each depth pixel to the coordinates of its corresponding color pixel\n        m_pNuiSensor->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(\n            NUI_IMAGE_RESOLUTION_640x480,\n            NUI_IMAGE_RESOLUTION_640x480,\n            FrameBufferSize,\n            (USHORT *)LockedRect.pBits,\n            FrameBufferSize * 2,\n            m_colorCoordinates\n        );\n\n        pTexture->UnlockRect(0);\n\n        // Get the depth image pixel texture\n        hr = m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, &imageFrame, &nearMode, &pTexture);\n        if (FAILED(hr))\n        {\n            goto ReleaseFrame;\n        }\n\n        // Lock the frame data so the Kinect knows not to modify it while we're reading it\n        pTexture->LockRect(0, &LockedRect, NULL, 0);\n\n        // Make sure we've received valid data\n        if (LockedRect.Pitch != 0)\n   
     {\n            const NUI_DEPTH_IMAGE_PIXEL *pBufferRun = reinterpret_cast<const NUI_DEPTH_IMAGE_PIXEL *>(LockedRect.pBits);\n\n            // pBufferEnd points one past the last pixel (start + width*height)\n            const NUI_DEPTH_IMAGE_PIXEL *pBufferEnd = pBufferRun + FrameBufferSize;\n\n            int idx = 0;\n            while (pBufferRun < pBufferEnd)\n            {\n                float depth = pBufferRun->depth;\n                if (depth != 0)\n                {\n                    // The sensor reports depth in millimeters; convert to meters\n                    buffer[idx] = depth / 1000.f;\n                }\n                else\n                {\n                    buffer[idx] = std::numeric_limits<float>::quiet_NaN();\n                }\n                ++pBufferRun;\n                ++idx;\n            }\n\n            hr = S_OK;\n        }\n        else\n        {\n            hr = E_FAIL;\n        }\n\n        // We're done with the texture so unlock it\n        pTexture->UnlockRect(0);\n\n        pTexture->Release();\n\n    ReleaseFrame:\n        // Release the frame\n        m_pNuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &imageFrame);\n\n        return hr;\n    }\n\n    HRESULT KinectSDKGrabber::ProcessColor(cv::Mat &mat)\n    {\n        NUI_IMAGE_FRAME imageFrame;\n\n        HRESULT hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pColorStreamHandle, 0, &imageFrame);\n        if (FAILED(hr))\n        {\n            return hr;\n        }\n\n        NUI_LOCKED_RECT LockedRect;\n        hr = imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0);\n        if (FAILED(hr))\n        {\n            goto ReleaseFrame;\n        }\n\n        if (LockedRect.Pitch != 0)\n        {\n            // The stream delivers BGRA pixels and mat is allocated with a matching 4-channel\n            // layout, so the whole frame can be copied in one call\n            memcpy(mat.data, LockedRect.pBits, LockedRect.size);\n\n            hr = S_OK;\n        }\n        else\n        {\n            hr = E_FAIL;\n        }\n\n        imageFrame.pFrameTexture->UnlockRect(0);\n\n    ReleaseFrame:\n        // Keep an earlier failure code instead of overwriting it with the release result\n        {\n            HRESULT releaseHr = m_pNuiSensor->NuiImageStreamReleaseFrame(m_pColorStreamHandle, &imageFrame);\n            if (SUCCEEDED(hr))\n            {\n                hr = releaseHr;\n            }\n        }\n\n        return hr;\n    }\n\n    INuiCoordinateMapper *KinectSDKGrabber::GetCoordinateMapper()\n    {\n        if (m_pNuiSensor)\n        {\n            INuiCoordinateMapper *mapper;\n            HRESULT hr = m_pNuiSensor->NuiGetCoordinateMapper(&mapper);\n            if (SUCCEEDED(hr))\n            {\n                return mapper;\n            }\n            else\n            {\n                return NULL;\n            }\n        }\n        return NULL;\n    }\n\n}"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Grabber/KinectSDKGrabber.h",
    "content": "#pragma once\n\n#ifndef KINECTSDKGRABBER_H\n#define KINECTSDKGRABBER_H\n\n#include <Grabber/GrabberBase.h>\n#include <DataObject/IDataObject.h>\n\n#include <Windows.h>\n#include <NuiApi.h>\n\nnamespace hpe\n{\n    class KinectCoordinatesMapper : public IDataObject\n    {\n        public:\n            KinectCoordinatesMapper() : correspondencies(nullptr) {}\n            KinectCoordinatesMapper(LONG *c) : correspondencies(c) {}\n            LONG *correspondencies;\n    };\n\n    class KinectSDKGrabber : public GrabberBase\n    {\n        public:\n            KinectSDKGrabber(void);\n            ~KinectSDKGrabber(void);\n\n            INuiCoordinateMapper *GetCoordinateMapper();\n\n        protected:\n            bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame) override;\n            IDataStorage::Ptr CreateDataStorage() override;\n            void PreRun() override;\n\n        private:\n            static const int FrameWidth = 640;\n            static const int FrameHeight = 480;\n            static const int FrameBufferSize = FrameWidth *FrameHeight;\n\n            HRESULT CreateFirstConnected();\n            HRESULT ProcessDepth(float *buffer);\n            HRESULT ProcessColor(cv::Mat &mat);\n\n            INuiSensor *m_pNuiSensor;\n            HANDLE m_pDepthStreamHandle;\n            HANDLE m_hNextDepthFrameEvent;\n            HANDLE m_pColorStreamHandle;\n            HANDLE m_hNextColorFrameEvent;\n\n            LONG *m_colorCoordinates;\n    };\n\n}\n\n#endif"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Processor/KinectSDKConverterProcessor.cpp",
    "content": "#include \"KinectSDKConverterProcessor.h\"\n\n#include <boost/math/special_functions/fpclassify.hpp>\n#include <boost/math/special_functions/round.hpp>\n\n#include <Windows.h>\n#include <NuiApi.h>\n\n#include <WindowsOnly/Grabber/KinectSDKGrabber.h>\n#include <Exception/HPEException.h>\n#include <DataObject/RawFrames.h>\n#include <DataObject/CloudXYZRGBA.h>\n\nnamespace hpe\n{\n    typedef union\n    {\n        struct\n        {\n            unsigned char Blue;\n            unsigned char Green;\n            unsigned char Red;\n            unsigned char Alpha;\n        };\n        float float_value;\n        uint32_t long_value;\n    } RGBValue;\n\n    KinectSDKConverterProcessor::KinectSDKConverterProcessor() : m_cloudKey(\"Cloud\")\n    {\n    }\n\n    KinectSDKConverterProcessor::KinectSDKConverterProcessor(std::string key) : m_cloudKey(key)\n    {\n    }\n\n    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr KinectSDKConverterProcessor::DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr)\n    {\n        cv::Size depthImageSize = depth.size();\n\n        float centerX, centerY;\n        float scaleFactorX, scaleFactorY;\n\n        ComputeCameraIntrinsics(depthImageSize, centerX, centerY, scaleFactorX, scaleFactorY);\n\n        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr result(new pcl::PointCloud<pcl::PointXYZRGBA>);\n        result->width = depthImageSize.width;\n        result->height = depthImageSize.height;\n        result->is_dense = false;\n        result->points.resize(depthImageSize.width * depthImageSize.height);\n\n        for (int y = 0; y < depthImageSize.height; ++y)\n        {\n            for (int x = 0; x < depthImageSize.width; ++x)\n            {\n                pcl::PointXYZRGBA &pt = result->at(x, y);\n\n                float depthValue = depth.at<float>(y, x);\n\n                if (depthValue == -1000.f || depthValue == 0.0f || boost::math::isnan(depthValue))\n                {\n                    pt.x = pt.y = pt.z = 
std::numeric_limits<float>::quiet_NaN();\n                }\n                else\n                {\n                    pt.x = (static_cast<float>(x) - centerX) * scaleFactorX * depthValue;\n                    pt.y = (centerY - static_cast<float>(y)) * scaleFactorY * depthValue;\n                    pt.z = depthValue;\n\n                    int depthIndex = x + y * depthImageSize.width;\n\n                    LONG actX = m_correspondencies[depthIndex * 2];\n                    LONG actY = m_correspondencies[depthIndex * 2 + 1];\n\n                    if (actX >= bgr.cols || actX < 0 || actY < 0 || actY >= bgr.rows)\n                    {\n                        pt.rgba = 0;\n                    }\n                    else\n                    {\n                        cv::Vec4b colorValue = bgr.at<cv::Vec4b>(actY, actX);\n                        RGBValue color;\n                        color.Red = colorValue[2];\n                        color.Green = colorValue[1];\n                        color.Blue = colorValue[0];\n                        color.Alpha = 255;\n                        pt.rgba = color.long_value;\n                    }\n                }\n            }\n        }\n\n        return result;\n    }\n\n    void KinectSDKConverterProcessor::ComputeCameraIntrinsics(cv::Size &imageSize, float &centerX, float &centerY, float &scaleFactorX, float &scaleFactorY)\n    {\n        int dims[] = {imageSize.width, imageSize.height};\n\n        // The default focal length of 525 is only valid for VGA (640x480), so scale it for other resolutions\n        scaleFactorX = scaleFactorY = 1 / 525.f * 640.f / dims[0];\n        centerX = (static_cast<float>(dims[0]) - 1.f) / 2.f;\n        centerY = (static_cast<float>(dims[1]) - 1.f) / 2.f;\n    }\n\n    void KinectSDKConverterProcessor::Process(IDataStorage::Ptr dataStorage)\n    {\n        std::shared_ptr<KinectCoordinatesMapper> mapperObject = std::dynamic_pointer_cast<KinectCoordinatesMapper>(dataStorage->Get(\"CoordinatesMapper\"));\n        if (mapperObject.get() == nullptr)\n        {\n            throw HPEException(\"KinectSDKConverterProcessor::Process - mapperObject.get() == nullptr\");\n        }\n        m_correspondencies = mapperObject->correspondencies;\n\n        IDataObject::Ptr object = dataStorage->Get(\"RawFrames\");\n        RawFrames::Ptr rawFrames = std::dynamic_pointer_cast<RawFrames>(object);\n        if (rawFrames.get() == nullptr)\n        {\n            throw HPEException(\"KinectSDKConverterProcessor::Process - rawFrames.get() == nullptr\");\n        }\n\n        CloudXYZRGBA::Ptr cloudObject(new CloudXYZRGBA);\n        cloudObject->cloud = DepthRGBToPointCloud(rawFrames->depthFrame, rawFrames->colorFrame);\n        // Flatten the organized cloud into a single row (width = point count, height = 1)\n        cloudObject->cloud->width = cloudObject->cloud->width * cloudObject->cloud->height;\n        cloudObject->cloud->height = 1;\n\n        dataStorage->Set(m_cloudKey, cloudObject);\n    }\n\n}\n"
  },
  {
    "path": "FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Processor/KinectSDKConverterProcessor.h",
    "content": "#ifndef KINECTSDKCONVERTERPROCESSOR_H\n#define KINECTSDKCONVERTERPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n\n#include <opencv2/opencv.hpp>\n\n#include <pcl/point_cloud.h>\n#include <pcl/point_types.h>\n\n#include <Windows.h>\n\nnamespace hpe\n{\n    class KinectSDKConverterProcessor : public IProcessor\n    {\n        public:\n            KinectSDKConverterProcessor();\n            KinectSDKConverterProcessor(std::string key);\n            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr);\n            void Process(IDataStorage::Ptr dataStorage) override;\n        private:\n            void ComputeCameraIntrinsics(cv::Size &imageSize, float &centerX, float &centerY, float &scaleFactorX, float &scaleFactorY);\n\n            LONG *m_correspondencies;\n            std::string m_cloudKey;\n    };\n}\n\n#endif // KINECTDATACONVERTER_H\n"
  },
  {
    "path": "FaceCept3D/TemplateCreator/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8)\n\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\n\nset(HEAD_POSE_ESTIMATION \"HeadPoseEstimationFramework\")\n\ncmake_policy(SET CMP0015 OLD)\n\nproject(TemplateCreator)\n\nFILE (GLOB SRC_LIST *.cpp *.h)\n\nfind_package(PCL 1.7 REQUIRED)\nfind_package(OpenCV REQUIRED)\n\ninclude_directories(${PCL_INCLUDE_DIRS} ../${HEAD_POSE_ESTIMATION})\nlink_directories(${PCL_LIBRARY_DIRS} ${CMAKE_BINARY_DIR} ../${HEAD_POSE_ESTIMATION}-build)\nadd_definitions(${PCL_DEFINITIONS})\n\nadd_executable(${PROJECT_NAME} ${SRC_LIST})\n\ntarget_link_libraries(${PROJECT_NAME} ${PCL_LIBRARIES} ${OpenCV_LIBS} ${HEAD_POSE_ESTIMATION})\n\n"
  },
  {
    "path": "FaceCept3D/TemplateCreator/TemplateCreator.cpp",
    "content": "// TemplateCreator.cpp : Defines the entry point for the console application.\n//\n\n#include \"stdafx.h\"\n\n#include <Filter/MovingLeastSquaresFilter.h>\n#include <Filter/Filters.h>\n\n#include <UI/ShowCloud.h>\n#include <UI/PointPicker.h>\n\n#include <Landmarks.h>\n#include <Exception/HPEException.h>\n\n#include \"UIProcessor.h\"\n\nusing namespace hpe;\ntypedef pcl::PointXYZRGBA PointType;\n\n\nint main(int argc, char *argv[])\n{\n    HPESettings settings;\n\n    try\n    {\n#ifdef __linux__\n        OpenNIGrabber grabber;\n#else\n        KinectSDKGrabber grabber;\n#endif\n\n        std::shared_ptr<hpe::TemplateCreatorProcessor> templateCreator(new TemplateCreatorProcessor);\n        std::shared_ptr<UIProcessor> uiProcessor(new UIProcessor(\"Cloud\"));\n        uiProcessor->SetGrabber(&grabber);\n\n        grabber.AddProcessor(IProcessor::Ptr(new DepthPreprocessingProcessor(1, 3, 5, 5, 1.2)));\n\n#ifndef __linux__\n        grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor));\n#endif\n\n        grabber.AddProcessor(IProcessor::Ptr(new ConverterProcessor(ConverterProcessor::ConverterPtr(new KinectDataConverter))));\n\n        grabber.AddProcessor(IProcessor::Ptr(new HeadExtractionProcessor));\n        grabber.AddProcessor(std::static_pointer_cast<IProcessor>(templateCreator));\n        grabber.AddProcessor(std::static_pointer_cast<IProcessor>(uiProcessor));\n\n\n        grabber.Start();\n\n        auto t = templateCreator->GetTemplate();\n\n        t = Voxelize<pcl::PointXYZRGBA>(t, 0.0025);\n\n        if (t->size() != 0)\n        {\n            MovingLeastSquaresFilter<pcl::PointXYZRGBA> mls;\n            auto resampledTemplate = mls.Filter(t);\n\n            PointPicker<pcl::PointXYZRGBA> picker;\n            picker.SetCloud(resampledTemplate);\n            Common<pcl::PointXYZRGBA>::Landmarks l = picker.Pick(\"Select the eyes and the nose\", 3, \"Select the left eye center, \"\n                                                 
                \"then the right eye center, then the tip of the nose.\\n\"\n                                                                 \"NOTE: Make sure use see the front part of the face, not the backward facing part\");\n\n            pcl::io::savePCDFileBinary(settings.GetString(\"Template\"), *resampledTemplate);\n            hpe::SaveLandmarks<pcl::PointXYZRGBA>(l, settings.GetString(\"Landmarks\"));\n        }\n    }\n    catch (HPEException ex)\n    {\n        std::cout << ex.what() << std::endl;\n        std::cin.ignore();\n    }\n\n    return 0;\n}\n\n"
  },
  {
    "path": "FaceCept3D/TemplateCreator/TemplateCreator.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <ProjectGuid>{39EBEEC1-29C0-48BA-9858-E593CAEE7757}</ProjectGuid>\n    <Keyword>Win32Proj</Keyword>\n    <RootNamespace>TemplateCreator</RootNamespace>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    
<CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n    <Import Project=\"..\\Config\\CommonProperties.props\" />\n    <Import Project=\"..\\Config\\CommonPropertiesLibsDebug.props\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"..\\Config\\CommonProperties.props\" />\n    <Import Project=\"..\\Config\\CommonPropertiesLibsRelease.props\" />\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" 
Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <LinkIncremental>true</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <LinkIncremental>true</LinkIncremental>\n    <LibraryPath>$(SolutionDir)$(Platform)\\$(Configuration)\\;$(LibraryPath)</LibraryPath>\n    <IncludePath>../HeadPoseEstimationFramework;$(IncludePath)</IncludePath>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <LinkIncremental>false</LinkIncremental>\n    <IncludePath>../HeadPoseEstimationFramework;$(IncludePath)</IncludePath>\n    <LibraryPath>$(SolutionDir)$(Platform)\\$(Configuration)\\;$(LibraryPath)</LibraryPath>\n  </PropertyGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      
<GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemGroup>\n    <ClInclude Include=\"stdafx.h\" />\n    <ClInclude Include=\"UIProcessor.h\" />\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"stdafx.cpp\">\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">Create</PrecompiledHeader>\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">Create</PrecompiledHeader>\n      <PrecompiledHeader 
Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">Create</PrecompiledHeader>\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">Create</PrecompiledHeader>\n    </ClCompile>\n    <ClCompile Include=\"TemplateCreator.cpp\" />\n    <ClCompile Include=\"UIProcessor.cpp\" />\n  </ItemGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "FaceCept3D/TemplateCreator/UIProcessor.cpp",
    "content": "#include \"stdafx.h\"\n#include \"UIProcessor.h\"\n\nUIProcessor::UIProcessor(void)\n    : m_grabber(nullptr), m_templateCreationTriggered(false)\n{\n}\n\nUIProcessor::UIProcessor(std::string key)\n    : ShowCloudProcessor(key), m_grabber(nullptr), m_templateCreationTriggered(false)\n{\n}\n\n\nUIProcessor::~UIProcessor(void)\n{\n}\n\nvoid UIProcessor::KeyPressed(const pcl::visualization::KeyboardEvent &event)\n{\n    if (event.keyDown())\n    {\n        std::string sym = event.getKeySym();\n        std::cout << sym << std::endl;\n        if (sym == \"Escape\")\n        {\n            if (m_grabber)\n            {\n                m_grabber->Stop();\n            }\n        }\n    }\n}\n\nvoid UIProcessor::SetGrabber(hpe::GrabberBase *grabber)\n{\n    m_grabber = grabber;\n}\n\npcl::PointCloud<pcl::PointXYZRGBA>::Ptr UIProcessor::GetTemplate()\n{\n    return m_template;\n}\n\nhpe::Common<pcl::PointXYZRGBA>::Landmarks UIProcessor::GetLandmarks()\n{\n    return m_landmarks;\n}\n"
  },
  {
    "path": "FaceCept3D/TemplateCreator/UIProcessor.h",
    "content": "#pragma once\n\n#ifndef UIPROCESSOR_H\n#define UIPROCESSOR_H\n\n#include <Processor/ShowCloudProcessor.h>\n#include <Processor/TemplateCreatorProcessor.h>\n#include <Grabber/GrabberBase.h>\n#include <Common.h>\n\nclass UIProcessor : public hpe::ShowCloudProcessor\n{\n    public:\n        UIProcessor(void);\n        UIProcessor(std::string key);\n        ~UIProcessor(void);\n\n        void SetGrabber(hpe::GrabberBase *grabber);\n        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr GetTemplate();\n        hpe::Common<pcl::PointXYZRGBA>::Landmarks GetLandmarks();\n\n    protected:\n        void KeyPressed(const pcl::visualization::KeyboardEvent &event);\n\n    private:\n        hpe::GrabberBase *m_grabber;\n        bool m_templateCreationTriggered;\n\n        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr m_template;\n        hpe::Common<pcl::PointXYZRGBA>::Landmarks m_landmarks;\n        std::shared_ptr<hpe::TemplateCreatorProcessor> m_templateCreator;\n};\n\n#endif\n"
  },
  {
    "path": "FaceCept3D/TemplateCreator/settings.ini",
    "content": "Template=MyTemplate.pcd\nLandmarks=MyTemplateLandmarks.txt\n\n"
  },
  {
    "path": "FaceCept3D/TemplateCreator/stdafx.cpp",
    "content": "// stdafx.cpp : source file that includes just the standard includes\n// TemplateCreator.pch will be the pre-compiled header\n// stdafx.obj will contain the pre-compiled type information\n\n#include \"stdafx.h\"\n\n// TODO: reference any additional headers you need in STDAFX.H\n// and not in this file\n"
  },
  {
    "path": "FaceCept3D/TemplateCreator/stdafx.h",
    "content": "// stdafx.h : include file for standard system include files,\n// or project specific include files that are used frequently, but\n// are changed infrequently\n//\n\n#pragma once\n\n#define _CRT_SECURE_NO_WARNINGS\n#include <stdio.h>\n#undef _CRT_SECURE_NO_WARNINGS\n\n#include <pcl/io/pcd_io.h>\n#include <pcl/visualization/cloud_viewer.h>\n#include <pcl/search/kdtree.h>\n#include <pcl/common/time.h>\n\n#include <Settings/HPESettings.h>\n\n#ifndef __linux__\n\t#include <objbase.h>\n    #include <WindowsOnly/Grabber/KinectSDKGrabber.h>\n    #include <WindowsOnly/Processor/KinectSDKConverterProcessor.h>\n#else\n\t#include <Grabber/OpenNIGrabber.h>\n#endif\n\n\n#include <Processor/ConverterProcessor.h>\n#include <Processor/DepthPreprocessingProcessor.h>\n#include <Processor/HeadExtractionProcessor.h>\n#include <Processor/TemplateCreatorProcessor.h>\n#include <Processor/ShowCloudProcessor.h>\n\n#include <Converter/KinectDataConverter.h>\n\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/ShowTwoCloudsProcessor.cpp",
    "content": "#include \"stdafx.h\"\n#include \"ShowTwoCloudsProcessor.h\"\n\n#include <DataObject/CloudXYZRGBA.h>\n#include <Grabber/PCDGrabber.h>\n\nusing namespace hpe;\n\nShowTwoCloudsProcessor::ShowTwoCloudsProcessor(void)\n    : m_cloud1Key(\"Cloud1\"), m_cloud2Key(\"Cloud2\"), m_first(true), m_visualizer(\"Two Clouds\")\n{\n}\n\nShowTwoCloudsProcessor::~ShowTwoCloudsProcessor(void)\n{\n}\n\nvoid ShowTwoCloudsProcessor::SetCloud1Key(std::string key)\n{\n    m_cloud1Key = key;\n}\n\nvoid ShowTwoCloudsProcessor::SetCloud2Key(std::string key)\n{\n    m_cloud2Key = key;\n}\n\nvoid ShowTwoCloudsProcessor::Process(hpe::IDataStorage::Ptr dataStorage)\n{\n    CloudXYZRGBA::Ptr cloud1 = dataStorage->GetAndCast<CloudXYZRGBA>(m_cloud1Key);\n    CloudXYZRGBA::Ptr cloud2 = dataStorage->GetAndCast<CloudXYZRGBA>(m_cloud2Key);\n    if (cloud1.get() == nullptr || cloud2.get() == nullptr)\n    {\n        return;\n    }\n\n    if (m_first)\n    {\n        m_first = false;\n        m_visualizer.addPointCloud(cloud1->cloud, m_cloud1Key);\n        m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, m_cloud1Key);\n        m_visualizer.addPointCloud(cloud2->cloud, m_cloud2Key);\n        m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, m_cloud2Key);\n    }\n    else\n    {\n        m_visualizer.updatePointCloud(cloud1->cloud, m_cloud1Key);\n        m_visualizer.updatePointCloud(cloud2->cloud, m_cloud2Key);\n    }\n\n    m_visualizer.spinOnce(1);\n\n    PCDGrabber::CloudFileInfo::Ptr fileInfo = dataStorage->GetAndCast<PCDGrabber::CloudFileInfo>(\"FileInfo\");\n    if (fileInfo.get() != nullptr)\n    {\n        std::string screenshotPath = boost::replace_all_copy(fileInfo->Filename, \".pcd\", \".png\");\n        //m_visualizer.saveScreenshot(screenshotPath);\n    }\n}\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/ShowTwoCloudsProcessor.h",
    "content": "#pragma once\n\n#ifndef SHOWTWOCLOUDSPROCESSOR_H\n#define SHOWTWOCLOUDSPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <pcl/visualization/pcl_visualizer.h>\n\nclass ShowTwoCloudsProcessor : public hpe::IProcessor\n{\n    public:\n        ShowTwoCloudsProcessor(void);\n        ~ShowTwoCloudsProcessor(void);\n\n        void SetCloud1Key(std::string key);\n        void SetCloud2Key(std::string key);\n\n        void Process(hpe::IDataStorage::Ptr dataStorage);\n\n    private:\n        std::string m_cloud1Key;\n        std::string m_cloud2Key;\n        pcl::visualization::PCLVisualizer m_visualizer;\n        bool m_first;\n};\n\n#endif"
  },
  {
    "path": "FaceCept3D/TemplateTracker/TemplateTracker.cpp",
    "content": "// TemplateTracker.cpp : Defines the entry point for the console application.\n//\n\n#include \"stdafx.h\"\n\n#include \"ShowTwoCloudsProcessor.h\"\n#include \"VoxelizeProcessor.h\"\n#include \"UIProcessor.h\"\n#include \"VisuilizeProcessor.h\"\n\n#include <UI/PointPicker.h>\n#include <Common.h>\n#include <Landmarks.h>\n#include <Grabber/ProviderGrabber.h>\n\n#include <Filter/Filters.h>\n#include <Filter/FunctorFilter.h>\n\n#include <Processor/ConverterProcessor.h>\n#include <Processor/DetectorProcessor.h>\n#include <Processor/FilterProcessor.h>\n\n#include <boost/bind.hpp>\n\n\nusing namespace hpe;\n\nint main(int argc, char *argv[])\n{\n    HPESettings settings;\n\n    KinectSDKGrabber grabber;\n\n    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr templateCloud(new pcl::PointCloud<pcl::PointXYZRGBA>);\n    pcl::io::loadPCDFile(settings.GetString(\"Template\"), *templateCloud);\n\n    std::shared_ptr<UIProcessor> uiProcessor(new UIProcessor);\n    uiProcessor->SetGrabber(&grabber);\n\n    std::shared_ptr<DepthPreprocessingProcessor> preprocessor(new DepthPreprocessingProcessor(1, 3, 13, 7, 1.2));\n\n    std::shared_ptr<hpe::TrackingProcessor> templateTracker(new hpe::TrackingProcessor(templateCloud));\n    templateTracker->SetTemplateLandmarks(LoadLandmarks<pcl::PointXYZRGBA>(settings.GetString(\"Landmarks\")));\n    templateTracker->SetSaveLandmakrs(false);\n    \n\n    std::string cloudToShowKey = \"FilteredOriginalCloud\";\n    std::shared_ptr<VisuilizeProcessor> visualizer(new VisuilizeProcessor(cloudToShowKey));\n    visualizer->SetTrackingProcessor(templateTracker);\n\n    std::shared_ptr<hpe::FacialExpressionProcessor> ferProcessor(new hpe::FacialExpressionProcessor(settings.GetString(\"FERData\")));\n    ferProcessor->SubscribeFacialExpressionReadySignal(boost::bind(&VisuilizeProcessor::HandleFER, visualizer.get(), _1));\n\n\n    std::shared_ptr<FunctorFilter<pcl::PointXYZRGBA>> functorFilter(new FunctorFilter<pcl::PointXYZRGBA>(\n    
[&settings](pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud)->pcl::PointCloud<pcl::PointXYZRGBA>::Ptr  {\n        return PassThrough<pcl::PointXYZRGBA>(cloud, 0., settings.GetValue<float>(\"DistanceToShow\"), \"z\");\n    }));\n\n    std::shared_ptr<FilterProcessor> filterProcessor(new FilterProcessor(functorFilter, \"OriginalCloud\", \"FilteredOriginalCloud\"));\n\n    grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor(\"OriginalCloud\")));\n    grabber.AddProcessor(filterProcessor);\n    grabber.AddProcessor(IProcessor::Ptr(new DetectorProcessor(settings.GetString(\"DetectorData\"))));\n    grabber.AddProcessor(preprocessor);\n    grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor(\"Cloud\")));\n    grabber.AddProcessor(IProcessor::Ptr(new VoxelizeProcessor(0.005)));\n    grabber.AddProcessor(templateTracker);\n    grabber.AddProcessor(ferProcessor);\n    grabber.AddProcessor(visualizer);\n\n    grabber.Start();\n\n    return 0;\n}\n\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/TemplateTracker.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <ProjectGuid>{68A9917E-A682-4013-8D30-389AB45A9E0D}</ProjectGuid>\n    <Keyword>Win32Proj</Keyword>\n    <RootNamespace>TemplateTracker</RootNamespace>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    
<CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>Unicode</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"..\\Config\\CommonProperties.props\" />\n    <Import Project=\"..\\Config\\CommonPropertiesLibsDebug.props\" />\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n    <Import Project=\"..\\Config\\CommonProperties.props\" />\n    <Import 
Project=\"..\\Config\\CommonPropertiesLibsRelease.props\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <LinkIncremental>true</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <LinkIncremental>true</LinkIncremental>\n    <IncludePath>../HeadPoseEstimationFramework;$(IncludePath)</IncludePath>\n    <LibraryPath>$(SolutionDir)$(Platform)\\$(Configuration)\\;$(LibraryPath)</LibraryPath>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <LinkIncremental>false</LinkIncremental>\n    <IncludePath>../HeadPoseEstimationFramework;$(IncludePath)</IncludePath>\n    <LibraryPath>$(SolutionDir)$(Platform)\\$(Configuration)\\;$(LibraryPath)</LibraryPath>\n  </PropertyGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <MultiProcessorCompilation>true</MultiProcessorCompilation>\n      
<MinimalRebuild>false</MinimalRebuild>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalOptions>/NODEFAULTLIB:LIBCMTD.lib %(AdditionalOptions)</AdditionalOptions>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>Use</PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <MultiProcessorCompilation>true</MultiProcessorCompilation>\n      <MinimalRebuild>true</MinimalRebuild>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      
<AdditionalDependencies>HeadPoseEstimationFramework_Release.lib;Kinect10.lib;opengl32.lib;MapReduceMPI.lib;mpistubs.lib;QVTK.lib;vtkalglib.lib;vtkCharts.lib;vtkCommon.lib;vtkDICOMParser.lib;vtkexoIIc.lib;vtkexpat.lib;vtkFiltering.lib;vtkfreetype.lib;vtkftgl.lib;vtkGenericFiltering.lib;vtkGeovis.lib;vtkGraphics.lib;vtkhdf5.lib;vtkHybrid.lib;vtkImaging.lib;vtkInfovis.lib;vtkIO.lib;vtkjpeg.lib;vtklibxml2.lib;vtkmetaio.lib;vtkNetCDF.lib;vtkNetCDF_cxx.lib;vtkpng.lib;vtkproj4.lib;vtkRendering.lib;vtksqlite.lib;vtksys.lib;vtktiff.lib;vtkverdict.lib;vtkViews.lib;vtkVolumeRendering.lib;vtkWidgets.lib;vtkzlib.lib;opencv_calib3d2410.lib;opencv_contrib2410.lib;opencv_core2410.lib;opencv_features2d2410.lib;opencv_flann2410.lib;opencv_gpu2410.lib;opencv_highgui2410.lib;opencv_imgproc2410.lib;opencv_legacy2410.lib;opencv_ml2410.lib;opencv_nonfree2410.lib;opencv_objdetect2410.lib;opencv_ocl2410.lib;opencv_photo2410.lib;opencv_stitching2410.lib;opencv_superres2410.lib;opencv_ts2410.lib;opencv_video2410.lib;opencv_videostab2410.lib;qhullstatic.lib;pcl_common_release.lib;pcl_features_release.lib;pcl_filters_release.lib;pcl_io_ply_release.lib;pcl_io_release.lib;pcl_kdtree_release.lib;pcl_keypoints_release.lib;pcl_octree_release.lib;pcl_outofcore_release.lib;pcl_people_release.lib;pcl_recognition_release.lib;pcl_registration_release.lib;pcl_sample_consensus_release.lib;pcl_search_release.lib;pcl_segmentation_release.lib;pcl_surface_release.lib;pcl_tracking_release.lib;pcl_visualization_release.lib;flann_cpp_s.lib;libboost_atomic-vc100-mt-1_55.lib;libboost_chrono-vc100-mt-1_55.lib;libboost_context-vc100-mt-1_55.lib;libboost_coroutine-vc100-mt-1_55.lib;libboost_date_time-vc100-mt-1_55.lib;libboost_exception-vc100-mt-1_55.lib;libboost_filesystem-vc100-mt-1_55.lib;libboost_graph-vc100-mt-1_55.lib;libboost_iostreams-vc100-mt-1_55.lib;libboost_locale-vc100-mt-1_55.lib;libboost_log-vc100-mt-1_55.lib;libboost_log_setup-vc100-mt-1_55.lib;libboost_math_c99-vc100-mt-1_55.lib;libboost_math_c99f-vc1
00-mt-1_55.lib;libboost_math_c99l-vc100-mt-1_55.lib;libboost_math_tr1-vc100-mt-1_55.lib;libboost_math_tr1f-vc100-mt-1_55.lib;libboost_math_tr1l-vc100-mt-1_55.lib;libboost_prg_exec_monitor-vc100-mt-1_55.lib;libboost_program_options-vc100-mt-1_55.lib;libboost_random-vc100-mt-1_55.lib;libboost_regex-vc100-mt-1_55.lib;libboost_serialization-vc100-mt-1_55.lib;libboost_signals-vc100-mt-1_55.lib;libboost_system-vc100-mt-1_55.lib;libboost_test_exec_monitor-vc100-mt-1_55.lib;libboost_thread-vc100-mt-1_55.lib;libboost_timer-vc100-mt-1_55.lib;libboost_unit_test_framework-vc100-mt-1_55.lib;libboost_wave-vc100-mt-1_55.lib;libboost_wserialization-vc100-mt-1_55.lib;phonon4.lib;QAxContainer.lib;QAxServer.lib;Qt3Support4.lib;QtCLucene4.lib;QtCore4.lib;QtDeclarative4.lib;QtDesigner4.lib;QtDesignerComponents4.lib;QtGui4.lib;QtHelp4.lib;qtmain.lib;QtMultimedia4.lib;QtNetwork4.lib;QtOpenGL4.lib;QtScript4.lib;QtScriptTools4.lib;QtSql4.lib;QtSvg4.lib;QtTest4.lib;QtUiTools.lib;QtWebKit4.lib;QtXml4.lib;QtXmlPatterns4.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemGroup>\n    <ClInclude Include=\"ShowTwoCloudsProcessor.h\" />\n    <ClInclude Include=\"stdafx.h\" />\n    <ClInclude Include=\"UIProcessor.h\" />\n    <ClInclude Include=\"VisuilizeProcessor.h\" />\n    <ClInclude Include=\"VoxelizeProcessor.h\" />\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"ShowTwoCloudsProcessor.cpp\" />\n    <ClCompile Include=\"stdafx.cpp\">\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">Create</PrecompiledHeader>\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">Create</PrecompiledHeader>\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">Create</PrecompiledHeader>\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">Create</PrecompiledHeader>\n    </ClCompile>\n    <ClCompile 
Include=\"TemplateTracker.cpp\" />\n    <ClCompile Include=\"UIProcessor.cpp\" />\n    <ClCompile Include=\"VisuilizeProcessor.cpp\" />\n    <ClCompile Include=\"VoxelizeProcessor.cpp\" />\n  </ItemGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "FaceCept3D/TemplateTracker/UIProcessor.cpp",
    "content": "#include \"stdafx.h\"\n#include \"UIProcessor.h\"\n\n#include <DataObject/RawFrames.h>\n#include <opencv2/opencv.hpp>\n\nUIProcessor::UIProcessor(void)\n    : m_grabber(nullptr), m_templateCreationTriggered(false)\n{\n}\n\nUIProcessor::UIProcessor(std::string key)\n    : m_grabber(nullptr), m_templateCreationTriggered(false)\n{\n}\n\nUIProcessor::~UIProcessor(void)\n{\n}\n\nvoid UIProcessor::KeyPressed(const pcl::visualization::KeyboardEvent &event)\n{\n    if (event.keyDown())\n    {\n        std::string sym = event.getKeySym();\n        if (sym == \"Escape\")\n        {\n            if (m_grabber)\n            {\n                m_grabber->Stop();\n            }\n        }\n    }\n}\n\nvoid UIProcessor::SetGrabber(hpe::GrabberBase *grabber)\n{\n    m_grabber = grabber;\n}\n\nvoid UIProcessor::Process(hpe::IDataStorage::Ptr dataStorage)\n{\n    using namespace hpe;\n    RawFrames::Ptr frames = dataStorage->GetAndCastNotNull<RawFrames>(\"RawFrames\");\n\n    cv::imshow(\"color\", frames->colorFrame);\n    cv::imshow(\"depth\", frames->depthFrame);\n\n    // Stop grabbing on any key press; guard against an unset grabber\n    int key = cv::waitKey(1);\n    if (key != -1 && m_grabber)\n    {\n        m_grabber->Stop();\n    }\n}\n\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/UIProcessor.h",
    "content": "#pragma once\n\n#ifndef UIPROCESSOR_H\n#define UIPROCESSOR_H\n\n#include <Processor/TemplateCreatorProcessor.h>\n#include <Grabber/GrabberBase.h>\n#include <Processor/IProcessor.h>\n\nclass UIProcessor : public hpe::IProcessor\n{\n    public:\n        UIProcessor(void);\n        UIProcessor(std::string key);\n        ~UIProcessor(void);\n\n        void SetGrabber(hpe::GrabberBase *grabber);\n        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr GetTemplate();\n\n        void Process(hpe::IDataStorage::Ptr dataStorage);\n\n    protected:\n        void KeyPressed(const pcl::visualization::KeyboardEvent &event);\n\n    private:\n        hpe::GrabberBase *m_grabber;\n        bool m_templateCreationTriggered;\n\n        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr m_template;\n        std::shared_ptr<hpe::TemplateCreatorProcessor> m_templateCreator;\n};\n\n#endif"
  },
  {
    "path": "FaceCept3D/TemplateTracker/VisuilizeProcessor.cpp",
    "content": "#include \"stdafx.h\"\n#include \"VisuilizeProcessor.h\"\n\n#include <DataObject/LandmarksObject.h>\n#include <DataObject/CloudXYZRGBA.h>\n\n#include <boost/format.hpp>\n#include <boost/filesystem.hpp>\n\n#include <math.h>\n#include <fstream>\n\nusing namespace hpe;\n\nVisuilizeProcessor::VisuilizeProcessor(std::string cloudKey)\n    : m_cloudKey(cloudKey), m_landmarksKey(\"Landmarks\"), m_visualizer(\"VisualizerProcessor\"), m_first(true), m_saveScreenshots(false), m_haveFerData(false)\n{\n    InitExpressionLabels();\n}\n\nVisuilizeProcessor::VisuilizeProcessor(bool saveScreenshots)\n    : m_cloudKey(\"Cloud\"), m_landmarksKey(\"Landmarks\"), m_visualizer(\"VisualizerProcessor\"), m_first(true), m_saveScreenshots(saveScreenshots), m_haveFerData(false)\n{\n    InitExpressionLabels();\n}\n\nVisuilizeProcessor::~VisuilizeProcessor(void)\n{\n}\n\nvoid VisuilizeProcessor::Process(hpe::IDataStorage::Ptr dataStorage)\n{\n    CloudXYZRGBA::Ptr cloud = dataStorage->GetAndCast<CloudXYZRGBA>(m_cloudKey);\n    LandmarksObject<pcl::PointXYZRGBA>::Ptr landmarks = dataStorage->GetAndCast<LandmarksObject<pcl::PointXYZRGBA>>(m_landmarksKey);\n    if (cloud.get() == nullptr || landmarks.get() == nullptr)\n    {\n        return;\n    }\n\n    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr eyes(new pcl::PointCloud<pcl::PointXYZRGBA>);\n    eyes->push_back(landmarks->landmarks[0].point);\n    eyes->push_back(landmarks->landmarks[1].point);\n\n    pcl::ModelCoefficients cylinderCoefficients;\n    cylinderCoefficients.values.push_back(landmarks->landmarks[2].point.x);\n    cylinderCoefficients.values.push_back(landmarks->landmarks[2].point.y);\n    cylinderCoefficients.values.push_back(landmarks->landmarks[2].point.z);\n    cylinderCoefficients.values.push_back((landmarks->landmarks[3].point.x - landmarks->landmarks[2].point.x) / 10);\n    cylinderCoefficients.values.push_back((landmarks->landmarks[3].point.y - landmarks->landmarks[2].point.y) / 10);\n    
cylinderCoefficients.values.push_back((landmarks->landmarks[3].point.z - landmarks->landmarks[2].point.z) / 10);\n    cylinderCoefficients.values.push_back(0.002);\n\n    Eigen::Vector3f angles = VectorToEulerAngles(landmarks->landmarks[2].point.getVector3fMap() - landmarks->landmarks[3].point.getVector3fMap());\n    std::string anglesText = (boost::format(\"Head pose in degrees:\\n  Yaw   %5.2f\\n  Tilt  %5.2f\\n  Roll  %5.2f\") % angles(0) % angles(1) % angles(2)).str();\n\n    if (m_first)\n    {\n        m_first = false;\n\n        m_visualizer.registerKeyboardCallback(&VisuilizeProcessor::KeyboardEventCallback, this);\n\n        m_visualizer.addCylinder(cylinderCoefficients);\n        m_visualizer.setShapeRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, ((float)0x73) / 255, ((float)0x3C) / 255, \"cylinder\");\n        m_visualizer.addPointCloud(cloud->cloud, m_cloudKey);\n        m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, m_cloudKey);\n        m_visualizer.addPointCloud(eyes, m_landmarksKey);\n        m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 25, 0, 0, m_landmarksKey);\n        m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, m_landmarksKey);\n        m_visualizer.addText(anglesText, 25, 175, 17, 1, 1, 1, \"anglesText\");\n\t\t//m_visualizer.addText(\"Facial expression:\", 25, 100, 17, 1, 1, 1, \"anglesTextCaption\");\n\t\tm_visualizer.setBackgroundColor(0, 0, 0);\n        if (m_saveScreenshots)\n        {\n            if (boost::filesystem::exists(\"camera.cam\"))\n            {\n                pcl::visualization::Camera camera;\n                LoadCamera(camera, \"camera.cam\");\n                m_visualizer.setCameraParameters(camera);\n            }\n            else\n            {\n                while (m_visualizer.wasStopped() == false)\n                {\n                    
m_visualizer.spinOnce(100);\n                }\n                std::vector<pcl::visualization::Camera> cameras;\n                m_visualizer.getCameras(cameras);\n                pcl::visualization::Camera camera = cameras[0];\n                SaveCamera(camera, \"camera.cam\");\n            }\n        }\n    }\n    else\n    {\n        m_visualizer.removeShape(\"cylinder\");\n        m_visualizer.addCylinder(cylinderCoefficients);\n        m_visualizer.removeShape(\"anglesText\");\n        m_visualizer.addText(anglesText, 25, 175, 17, 1, 1, 1, \"anglesText\");\n        m_visualizer.setShapeRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, ((float)0x73) / 255, ((float)0x3C) / 255, \"cylinder\");\n        m_visualizer.updatePointCloud(cloud->cloud, m_cloudKey);\n        m_visualizer.updatePointCloud(eyes, m_landmarksKey);\n\n        if (m_haveFerData)\n        {\n            int x = 20;\n            int y = 75;\n            int delta = 16;\n\n            auto recognizedExpression = std::max_element(m_ferData.begin(), m_ferData.end());\n            int recognizedExpressionIndex = std::distance(m_ferData.begin(), recognizedExpression);\n\n            // Highlight the recognized expression by doubling its color scale\n            for (int i = 0; i < m_ferData.size(); i++)\n            {\n                double colorScale = i == recognizedExpressionIndex ? 
2 : 1;\n                std::string message = (boost::format(\"  %1%: %2%\") % m_expressionLabels[i] % m_ferData[i]).str();\n                std::string textObjectLabel = (boost::format(\"ferText%1%\") % i).str();\n                m_visualizer.removeShape(textObjectLabel);\n                m_visualizer.addText(message, x, y, 17, colorScale * 0.5, colorScale * 0.5, colorScale * 0.5, textObjectLabel);\n                y += delta;\n            }\n        }\n    }\n\n    m_visualizer.spinOnce(1);\n\n    if (m_saveScreenshots)\n    {\n        PCDGrabber::CloudFileInfo::Ptr fileInfo = dataStorage->GetAndCast<PCDGrabber::CloudFileInfo>(\"FileInfo\");\n        if (fileInfo.get() != nullptr)\n        {\n            std::string screenshotPath = boost::replace_all_copy(fileInfo->Filename, \".pcd\", \".png\");\n            m_visualizer.saveScreenshot(screenshotPath);\n        }\n    }\n}\n\nEigen::Vector3f VisuilizeProcessor::VectorToEulerAngles(Eigen::Vector3f v)\n{\n    Eigen::Vector3f normed = v / v.norm();\n    float x = normed(0);\n    float y = normed(1);\n    float z = normed(2);\n\n    Eigen::Vector3f result;\n    result(0) = std::atan2(z, x) - M_PI / 2; // 
yaw\n    result(1) = -std::atan2(y, z); // tilt\n    result(2) = std::atan(y / x); // roll\n\n    return result * 180 / M_PI;\n\n}\n\nvoid VisuilizeProcessor::SaveCamera(pcl::visualization::Camera &camera, std::string file)\n{\n    std::ofstream str(file);\n\n    str << camera.clip[0] << std::endl;\n    str << camera.clip[1] << std::endl;\n    str << camera.focal[0] << std::endl;\n    str << camera.focal[1] << std::endl;\n    str << camera.focal[2] << std::endl;\n    str << camera.fovy << std::endl;\n    str << camera.pos[0] << std::endl;\n    str << camera.pos[1] << std::endl;\n    str << camera.pos[2] << std::endl;\n    str << camera.view[0] << std::endl;\n    str << camera.view[1] << std::endl;\n    str << camera.view[2] << std::endl;\n    str << camera.window_pos[0] << std::endl;\n    str << camera.window_pos[1] << std::endl;\n    str << camera.window_size[0] << std::endl;\n    str << camera.window_size[1] << std::endl;\n}\n\nvoid VisuilizeProcessor::LoadCamera(pcl::visualization::Camera &camera, std::string file)\n{\n    std::ifstream str(file);\n\n    str >> camera.clip[0];\n    str >> camera.clip[1];\n    str >> camera.focal[0];\n    str >> camera.focal[1];\n    str >> camera.focal[2];\n    str >> camera.fovy;\n    str >> camera.pos[0];\n    str >> camera.pos[1];\n    str >> camera.pos[2];\n    str >> camera.view[0];\n    str >> camera.view[1];\n    str >> camera.view[2];\n    str >> camera.window_pos[0];\n    str >> camera.window_pos[1];\n    str >> camera.window_size[0];\n    str >> camera.window_size[1];\n}\n\nvoid VisuilizeProcessor::HandleFER(std::vector<double> ferdata)\n{\n    m_haveFerData = true;\n    m_ferData = ferdata;\n}\n\nvoid VisuilizeProcessor::InitExpressionLabels()\n{\n    m_expressionLabels.clear();\n    m_expressionLabels.push_back(\"Neutral\");\n    m_expressionLabels.push_back(\"Happy\");\n    m_expressionLabels.push_back(\"Surprise\");\n}\n\nvoid VisuilizeProcessor::KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, 
void *sender)\n{\n    if (event.getKeySym() == \"space\")\n    {\n        VisuilizeProcessor *_this = static_cast<VisuilizeProcessor *>(sender);\n        if (_this->m_trackingProcessor.get() != nullptr)\n        {\n            _this->m_trackingProcessor->ResetTracker();\n        }\n    }\n}\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/VisuilizeProcessor.h",
    "content": "#pragma once\n\n#ifndef VISUALIZEPROCESSOR_H\n#define VISUALIZEPROCESSOR_H\n\n#include <Processor/IProcessor.h>\n#include <Processor/TrackingProcessor.h>\n#include <pcl/visualization/pcl_visualizer.h>\n\nclass VisuilizeProcessor : public hpe::IProcessor\n{\n    public:\n        VisuilizeProcessor(std::string cloudKey = \"Cloud\");\n        VisuilizeProcessor(bool saveScreenshots);\n        ~VisuilizeProcessor(void);\n\n        void Process(hpe::IDataStorage::Ptr dataStorage);\n\n        void HandleFER(std::vector<double> ferdata);\n\n        void SetTrackingProcessor(std::shared_ptr<hpe::TrackingProcessor> processor)\n        {\n            m_trackingProcessor = processor;\n        }\n\n    private:\n        pcl::visualization::PCLVisualizer m_visualizer;\n        bool m_first;\n        bool m_saveScreenshots;\n        bool m_haveFerData;\n        std::vector<double> m_ferData;\n        std::string m_cloudKey;\n        std::string m_landmarksKey;\n        std::vector<std::string> m_expressionLabels;\n\n        std::shared_ptr<hpe::TrackingProcessor> m_trackingProcessor;\n\n        static void KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, void *sender);\n\n        Eigen::Vector3f VectorToEulerAngles(Eigen::Vector3f v);\n\n        void SaveCamera(pcl::visualization::Camera &camera, std::string file);\n        void LoadCamera(pcl::visualization::Camera &camera, std::string file);\n\n        void InitExpressionLabels();\n};\n\n#endif"
  },
  {
    "path": "FaceCept3D/TemplateTracker/VoxelizeProcessor.cpp",
    "content": "#include \"stdafx.h\"\n#include \"VoxelizeProcessor.h\"\n\n#include <Filter/Filters.h>\n#include <DataObject/CloudXYZRGBA.h>\n\nVoxelizeProcessor::VoxelizeProcessor(float size)\n    : m_size(size)\n{\n}\n\n\nVoxelizeProcessor::~VoxelizeProcessor(void)\n{\n}\n\nvoid VoxelizeProcessor::Process(hpe::IDataStorage::Ptr dataStorage)\n{\n    using namespace hpe;\n    CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCastNotNull<CloudXYZRGBA>(\"Cloud\", \"VoxelizeProcessor::Process - Cloud is null\");\n    cloudObject->cloud = PassThrough<pcl::PointXYZRGBA>(cloudObject->cloud, 0, 1.5, \"z\", false);\n\n    cloudObject->cloud = Voxelize<pcl::PointXYZRGBA>(cloudObject->cloud, m_size);\n}\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/VoxelizeProcessor.h",
    "content": "#pragma once\n\n#include <Processor/IProcessor.h>\n\nclass VoxelizeProcessor : public hpe::IProcessor\n{\n    public:\n        VoxelizeProcessor(float size = 0.005f);\n        ~VoxelizeProcessor(void);\n\n        void Process(hpe::IDataStorage::Ptr dataStorage);\n    private:\n        float m_size;\n};\n\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/settings.ini",
    "content": "DistanceToShow=1.2\nStoreFolder=d:\\Documents\\Image Databases\\video\n\nDetectorData=d:\\Projects\\HeadPoseEstimation\\Data\\DetectorDataDali\nFERData=d:\\Projects\\HeadPoseEstimation\\Data\\FERDataDali\n\nTemplate=d:\\Projects\\HeadPoseEstimation\\Data\\sergey.pcd\nLandmarks=landmarks.bnd\n\nShapePredictor=d:\\Projects\\HeadPoseEstimation\\Data\\shape_predictor_68_face_landmarks.dat\n\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/stdafx.cpp",
    "content": "// stdafx.cpp : source file that includes just the standard includes\n// TemplateTracker.pch will be the pre-compiled header\n// stdafx.obj will contain the pre-compiled type information\n\n#include \"stdafx.h\"\n\n// TODO: reference any additional headers you need in STDAFX.H\n// and not in this file\n"
  },
  {
    "path": "FaceCept3D/TemplateTracker/stdafx.h",
    "content": "// stdafx.h : include file for standard system include files,\n// or project specific include files that are used frequently, but\n// are changed infrequently\n//\n\n#pragma once\n\n#include <stdio.h>\n\n#include <limits>\n\n#include <Settings/HPESettings.h>\n\n#include <Grabber/PCDGrabber.h>\n#include <Processor/DepthPreprocessingProcessor.h>\n#include <Processor/TrackingProcessor.h>\n#include <Processor/ShowCloudProcessor.h>\n#include <Processor/SaveCloudProcessor.h>\n#include <Processor/FacialExpressionsProcessor.h>\n\n#include <pcl/io/pcd_io.h>\n\n#ifndef __linux__\n\t#include <objbase.h>\n\t#include <WindowsOnly/Grabber/KinectSDKGrabber.h>\n\t#include <WindowsOnly/Processor/KinectSDKConverterProcessor.h>\n#else\n\t#include <Grabber/OpenNIGrabber.h>\n#endif"
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\r\n\r\nCopyright (c) 2015 sergeytulyakov\r\n\r\nPermission is hereby granted, free of charge, to any person obtaining a copy\r\nof this software and associated documentation files (the \"Software\"), to deal\r\nin the Software without restriction, including without limitation the rights\r\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\r\ncopies of the Software, and to permit persons to whom the Software is\r\nfurnished to do so, subject to the following conditions:\r\n\r\nThe above copyright notice and this permission notice shall be included in all\r\ncopies or substantial portions of the Software.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\r\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\r\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\r\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\r\nSOFTWARE.\r\n\r\n"
  },
  {
    "path": "README.md",
    "content": "# FaceCept3D\r\nFaceCept3D: 3D Face Analysis and Recognition\r\n\r\n## Introduction\r\n\r\nFaceCept3D is a realtime framework for 3D face analysis and recognition. It contains a set of extendible components that can be combined to fulfil a specific task. At this step we open source the following functionality:\r\n\r\n1. Person-specific template creation\r\n2. Extreme head pose estimation\r\n3. Facial expression analysis\r\n\r\nFaceCept3D is based on the following works:\r\n\r\n* Robust Real-Time Extreme Head Pose Estimation. [[pdf](https://github.com/sergeytulyakov/FaceCept3D/blob/master/papers/Robust%20Real-Time%20Extreme%20Head%20Pose%20Estimation.pdf)]\r\n\r\n* Facial expression recognition under a wide range of head poses. [[pdf](https://github.com/sergeytulyakov/FaceCept3D/blob/master/papers/Facial%20Expression%20Recognition%20under%20a%20Wide%20Range%20of%20Head%20Poses.pdf)]\r\n\r\nHere is an example image of head pose estimation using FaceCept3D.\r\n\r\n![Head Pose Estimation Examples](https://github.com/sergeytulyakov/FaceCept3D/blob/master/docs/home/HalfHeadPoseEstimationExamples.jpg)\r\n\r\nSee FaceCept3D [wiki](https://github.com/sergeytulyakov/FaceCept3D/wiki) for more details.\r\n\r\n## Requirements\r\n\r\nTo install FaceCept3D you need to have the following libraries available\r\n\r\n* [Point Cloud Library PCL](http://www.pointclouds.org/) version 1.7\r\n* [OpenCV](http://opencv.org/) >= version 2.4\r\n* [Qt](http://download.qt.io/archive/qt/4.8/4.8.6/) version 4.8 (required by VTK 5.6)\r\n\r\nand their dependencies.\r\n\r\nIn addition, if you plan to use a depth sensor, you will to install the driver. Currently we support only the Microsoft Kinect 1.0 sensor. 
However, the code can be easily extended to most of the available RGB-D sensors.\r\n\r\nTo use MS Kinect sensors:\r\n\r\n* On Windows, install the [SDK (v 1.8)](http://www.microsoft.com/en-us/download/details.aspx?id=40278)\r\n\r\n## Installation\r\n\r\nFaceCept3D was tested on Windows and Linux. Mac users should follow the Linux installation instructions.\r\n\r\n### Linux\r\n\r\n* **OpenNI and SensorKinect drivers**. A great guide is [here](https://bitbucket.org/samirmenon/scl-manips-v2/wiki/vision/kinect).\r\n\r\n* **Prerequisites**. Install the following libraries using your package manager:\r\n\r\n  ```Shell\r\n  boost, eigen, flann, vtk (v 5.6), qhull, opencv, tbb, qt (v 4.8)\r\n  ```\r\n\r\n\tFor Ubuntu 14.04, the following command will do the trick:\r\n\r\n\t```Shell\r\n\tsudo apt-get install libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libqhull-dev libopencv-dev libtbb-dev libqt4-dev\r\n\t```\r\n\r\n\tIn addition, you need to install CMake if you don't have it:\r\n\r\n\t```Shell\r\n\tsudo apt-get install cmake\r\n\t```\r\n\r\n* **Install PCL**. We need to build and install PCL from source, since the version [here](http://pointclouds.org/downloads/linux.html) is built without the *-std=c++11* flag. This [guide](http://pointclouds.org/downloads/source.html) will help you build PCL from source.\r\n\r\n* **Check that everything works**. Try running pcl_openni_viewer. If you don't see any output, then there is something wrong with your installation.\r\n\r\n* **Build FaceCept3D**. \r\n\r\n  ```Shell\r\n  git clone https://github.com/sergeytulyakov/FaceCept3D.git\r\n  cd FaceCept3D\r\n  mkdir build\r\n  cd build\r\n  cmake ..\r\n  make\r\n  ```\r\n\r\n### Windows\r\n\r\nOn Windows, there are two ways to build FaceCept3D:\r\n\r\n1. Using CMake to generate an *.sln file\r\n2. Using the provided *.sln file.\r\n\r\nIt is more convenient to use the latter option, since the provided solution file is cleaner than the one generated by CMake. 
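\r\n\r\nFor completeness, generating the solution with CMake might look like the following sketch (the generator name here is an assumption for a 64-bit msvc2010 build, matching the provided dependencies; adjust it to your CMake version):\r\n\r\n```Shell\r\ncd FaceCept3D\r\nmkdir build\r\ncd build\r\ncmake .. -G \"Visual Studio 10 2010 Win64\"\r\n```\r\n\r\n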
If you still want to use CMake, you will need to enter all necessary paths manually.\r\n\r\nTo simplify the process, we provide all the dependencies in binary format. All libs were compiled with msvc2010-win64. You can still use them in a later version of Visual Studio, but you need to have the msvc2010 toolset installed. \r\n\r\n* **Install Microsoft Kinect Driver 1.8**.\r\n\tDownload it [here](http://www.microsoft.com/en-us/download/details.aspx?id=40278) and install it.\r\n\t\r\n* **Download and unpack the precompiled dependencies**.\r\n\tDownload them [here](http://sourceforge.net/projects/facecept3d-3rdparty/files/msvc2010-win64/)\r\n\t\r\n* **Set up environment variables**\r\n\tYou may find [Rapid Environment Editor](http://www.rapidee.com/en/download) useful for editing environment variables on Windows.\r\n\t\r\n\tSet KINECTSDK10_DIR and FACECEPT3D_3RDPARTY_DIR. For example, mine point to\r\n\t\r\n\t```Shell\r\n\tKINECTSDK10_DIR=c:\\Program Files\\Microsoft SDKs\\Kinect\\v1.8\r\n\tFACECEPT3D_3RDPARTY_DIR=d:\\Dependencies\\FaceCept3D-3rdparty\\\r\n\t```\r\n\t\r\n\tYou also need to add the following folders to your PATH:\r\n\t\r\n\t```Shell\r\n\t%FACECEPT3D_3RDPARTY_DIR%\\Qt\\bin\r\n\t%FACECEPT3D_3RDPARTY_DIR%\\PCL\\bin\r\n\t%FACECEPT3D_3RDPARTY_DIR%\\Opencv\\bin\r\n\t%FACECEPT3D_3RDPARTY_DIR%\\FLANN\\bin\r\n\t%FACECEPT3D_3RDPARTY_DIR%\\QHull\\bin\t\r\n\t```\r\n\t\r\n\tThe rest is configured in the project property files. If something doesn't work, inspect the *.props files in the Config folder. If you want to use your own versions of the dependencies, you will have to adjust the *.props files. \r\n\t\r\n\tOpen 3dheadposeestimation.sln with your version of Visual Studio and try to build. 
If you have a Visual Studio version newer than 2010, **do not agree to convert the solution to a newer toolset**: since the dependencies are built with msvc2010-win64, the converted solution will not link.\r\n\r\n### What to do next\r\n\r\nHave a look at our [wiki](https://github.com/sergeytulyakov/FaceCept3D/wiki) pages:\r\n\r\n1. [Getting Started](https://github.com/sergeytulyakov/FaceCept3D/wiki/1.-Gettings-Started)\r\n2. [Point Cloud Registration](https://github.com/sergeytulyakov/FaceCept3D/wiki/2.-Point-Cloud-Registration)\r\n3. [Personalized Template Creation with FaceCept3D](https://github.com/sergeytulyakov/FaceCept3D/wiki/3.-Personalized-Template-Creation)\r\n\r\n## Citing FaceCept3D\r\n\r\nIf you use FaceCept3D in your research, please cite one of the following (or both):\r\n\r\n\t@inproceedings{Tulyakov2014,\r\n\t\tauthor = {Tulyakov, S. and Vieriu, R. L. and Semeniuta, S. and Sebe, N.},\r\n\t\tbooktitle = {International Conference on Pattern Recognition},\r\n\t\ttitle = {{Robust Real-Time Extreme Head Pose Estimation}},\r\n\t\tyear = {2014},\r\n\t}\r\n\r\n\t@inproceedings{Vieriu2015,\r\n\t\tauthor = {Vieriu, R. L. and Tulyakov, S. and Sangineto, E. and Semeniuta, S. and Sebe, N.},\r\n\t\tbooktitle = {Automatic Face and Gesture Recognition},\r\n\t\ttitle = {{Facial Expression Recognition under a Wide Range of Head Poses}},\r\n\t\tyear = {2015},\r\n\t}\r\n\r\n"
  }
]