Repository: sergeytulyakov/FaceCept3D
Branch: master
Commit: f13c791af154
Files: 128
Total size: 331.9 KB

Directory structure:
gitextract_pxarii44/
├── .gitignore
├── FaceCept3D/
│   ├── 3dheadposeestimation.sln
│   ├── CMakeLists.txt
│   ├── Config/
│   │   ├── CommonProperties.props
│   │   ├── CommonPropertiesLibsDebug.props
│   │   └── CommonPropertiesLibsRelease.props
│   ├── HeadPoseEstimationFramework/
│   │   ├── Align/
│   │   │   ├── CloudMapper.cpp
│   │   │   ├── CloudMapper.h
│   │   │   ├── IHeadTemplateCreator.h
│   │   │   ├── IncrementalHeadTemplateCreator.cpp
│   │   │   ├── IncrementalHeadTemplateCreator.h
│   │   │   ├── TransformationEstimationHPE.cpp
│   │   │   └── TransformationEstimationHPE.h
│   │   ├── CMakeLists.txt
│   │   ├── Common.h
│   │   ├── Converter/
│   │   │   ├── IDataConverter.h
│   │   │   ├── KinectDataConverter.cpp
│   │   │   └── KinectDataConverter.h
│   │   ├── DataFilter/
│   │   │   └── IDataFilter.h
│   │   ├── DataObject/
│   │   │   ├── CloudXYZRGBA.h
│   │   │   ├── HeadPoseInfo.h
│   │   │   ├── IDataObject.h
│   │   │   ├── IDataStorage.h
│   │   │   ├── LandmarksObject.h
│   │   │   ├── MapDataStorage.cpp
│   │   │   ├── MapDataStorage.h
│   │   │   └── RawFrames.h
│   │   ├── Detector/
│   │   │   └── DepthFrameDetector/
│   │   │       ├── DepthDetector.cpp
│   │   │       ├── DepthDetector.h
│   │   │       ├── LeafNode.cpp
│   │   │       ├── LeafNode.h
│   │   │       ├── Tree.cpp
│   │   │       ├── Tree.h
│   │   │       ├── csvMat.cpp
│   │   │       ├── csvMat.h
│   │   │       ├── localFunctions.cpp
│   │   │       └── localFunctions.h
│   │   ├── Exception/
│   │   │   ├── HPEException.cpp
│   │   │   └── HPEException.h
│   │   ├── FacialExpressions/
│   │   │   ├── ferLocalFunctions.cpp
│   │   │   └── ferLocalFunctions.h
│   │   ├── Features/
│   │   │   ├── Calculator/
│   │   │   │   ├── CylinderOptimizedFeatureCalculator.cpp
│   │   │   │   ├── CylinderOptimizedFeatureCalculator.h
│   │   │   │   ├── FeatureCalculatorBase.h
│   │   │   │   └── IFeatureCalculator.h
│   │   │   ├── IFeature.h
│   │   │   └── Sampler/
│   │   │       ├── CylinderSampler.cpp
│   │   │       ├── CylinderSampler.h
│   │   │       └── IPatchSampler.h
│   │   ├── Filter/
│   │   │   ├── BoxFilter.h
│   │   │   ├── CenteredBoxFilter.h
│   │   │   ├── FilteringQueue.h
│   │   │   ├── Filters.h
│   │   │   ├── FunctorFilter.h
│   │   │   ├── ICenterCloudFilter.h
│   │   │   ├── ICloudFilter.h
│   │   │   └── MovingLeastSquaresFilter.h
│   │   ├── Grabber/
│   │   │   ├── GrabberBase.cpp
│   │   │   ├── GrabberBase.h
│   │   │   ├── OpenNIGrabber.cpp
│   │   │   ├── OpenNIGrabber.h
│   │   │   ├── PCDGrabber.cpp
│   │   │   ├── PCDGrabber.h
│   │   │   ├── ProviderGrabber.cpp
│   │   │   └── ProviderGrabber.h
│   │   ├── HeadPoseEstimationFramework.vcxproj
│   │   ├── Helpers/
│   │   │   ├── HpeHelpers.cpp
│   │   │   └── HpeHelpers.h
│   │   ├── Interface/
│   │   │   └── IInterface.h
│   │   ├── Landmarks.h
│   │   ├── Processor/
│   │   │   ├── ConverterProcessor.cpp
│   │   │   ├── ConverterProcessor.h
│   │   │   ├── DepthPreprocessingProcessor.cpp
│   │   │   ├── DepthPreprocessingProcessor.h
│   │   │   ├── DetectorProcessor.cpp
│   │   │   ├── DetectorProcessor.h
│   │   │   ├── FacialExpressionsProcessor.cpp
│   │   │   ├── FacialExpressionsProcessor.h
│   │   │   ├── FilterProcessor.cpp
│   │   │   ├── FilterProcessor.h
│   │   │   ├── HeadExtractionProcessor.cpp
│   │   │   ├── HeadExtractionProcessor.h
│   │   │   ├── IProcessor.h
│   │   │   ├── SaveCloudProcessor.cpp
│   │   │   ├── SaveCloudProcessor.h
│   │   │   ├── ShowCloudProcessor.cpp
│   │   │   ├── ShowCloudProcessor.h
│   │   │   ├── TemplateCreatorProcessor.cpp
│   │   │   ├── TemplateCreatorProcessor.h
│   │   │   ├── TrackingProcessor.cpp
│   │   │   └── TrackingProcessor.h
│   │   ├── Provider/
│   │   │   ├── IDataProvider.h
│   │   │   ├── RawDataProvider.cpp
│   │   │   └── RawDataProvider.h
│   │   ├── Settings/
│   │   │   ├── HPESettings.cpp
│   │   │   └── HPESettings.h
│   │   ├── Tracker/
│   │   │   └── ICPTemplateTracker.h
│   │   ├── UI/
│   │   │   ├── PointPicker.h
│   │   │   ├── ShowCloud.h
│   │   │   ├── ShowMesh.h
│   │   │   └── VisualizerCommon.h
│   │   └── WindowsOnly/
│   │       ├── Grabber/
│   │       │   ├── KinectSDKGrabber.cpp
│   │       │   └── KinectSDKGrabber.h
│   │       └── Processor/
│   │           ├── KinectSDKConverterProcessor.cpp
│   │           └── KinectSDKConverterProcessor.h
│   ├── TemplateCreator/
│   │   ├── CMakeLists.txt
│   │   ├── TemplateCreator.cpp
│   │   ├── TemplateCreator.vcxproj
│   │   ├── UIProcessor.cpp
│   │   ├── UIProcessor.h
│   │   ├── settings.ini
│   │   ├── stdafx.cpp
│   │   └── stdafx.h
│   └── TemplateTracker/
│       ├── ShowTwoCloudsProcessor.cpp
│       ├── ShowTwoCloudsProcessor.h
│       ├── TemplateTracker.cpp
│       ├── TemplateTracker.vcxproj
│       ├── UIProcessor.cpp
│       ├── UIProcessor.h
│       ├── VisuilizeProcessor.cpp
│       ├── VisuilizeProcessor.h
│       ├── VoxelizeProcessor.cpp
│       ├── VoxelizeProcessor.h
│       ├── settings.ini
│       ├── stdafx.cpp
│       └── stdafx.h
├── LICENSE
└── README.md

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
*.suo
*.sdf
*.user
*.opensdf
*.txt
*.idea
!CMakeLists.txt
build
ipch
Debug
Release
x64
data/DetectorData
data/FERData

================================================
FILE: FaceCept3D/3dheadposeestimation.sln
================================================
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 2013
VisualStudioVersion = 12.0.21005.1
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "HeadPoseEstimationFramework", "HeadPoseEstimationFramework\HeadPoseEstimationFramework.vcxproj", "{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}"
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "TemplateCreator", "TemplateCreator\TemplateCreator.vcxproj", "{39EBEEC1-29C0-48BA-9858-E593CAEE7757}"
	ProjectSection(ProjectDependencies) = postProject
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD} = {A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}
	EndProjectSection
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "TemplateTracker", "TemplateTracker\TemplateTracker.vcxproj", "{68A9917E-A682-4013-8D30-389AB45A9E0D}"
	ProjectSection(ProjectDependencies) = postProject
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD} = {A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}
	EndProjectSection
EndProject
Global
	GlobalSection(SolutionConfigurationPlatforms) = preSolution
		Debug|Mixed Platforms = Debug|Mixed Platforms
		Debug|Win32 = Debug|Win32
		Debug|x64 = Debug|x64
		Release|Mixed Platforms = Release|Mixed Platforms
		Release|Win32 = Release|Win32
		Release|x64 = Release|x64
	EndGlobalSection
	GlobalSection(ProjectConfigurationPlatforms) = postSolution
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|Mixed Platforms.ActiveCfg = Debug|x64
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|Win32.ActiveCfg = Debug|x64
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|x64.ActiveCfg = Debug|x64
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Debug|x64.Build.0 = Debug|x64
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|Mixed Platforms.ActiveCfg = Release|x64
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|Win32.ActiveCfg = Release|x64
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|x64.ActiveCfg = Release|x64
		{A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}.Release|x64.Build.0 = Release|x64
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Mixed Platforms.ActiveCfg = Debug|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Mixed Platforms.Build.0 = Debug|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Win32.ActiveCfg = Debug|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|Win32.Build.0 = Debug|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|x64.ActiveCfg = Debug|x64
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Debug|x64.Build.0 = Debug|x64
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Mixed Platforms.ActiveCfg = Release|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Mixed Platforms.Build.0 = Release|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Win32.ActiveCfg = Release|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|Win32.Build.0 = Release|Win32
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|x64.ActiveCfg = Release|x64
		{39EBEEC1-29C0-48BA-9858-E593CAEE7757}.Release|x64.Build.0 = Release|x64
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Mixed Platforms.ActiveCfg = Debug|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Mixed Platforms.Build.0 = Debug|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Win32.ActiveCfg = Debug|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|Win32.Build.0 = Debug|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|x64.ActiveCfg = Debug|x64
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Debug|x64.Build.0 = Debug|x64
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Mixed Platforms.ActiveCfg = Release|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Mixed Platforms.Build.0 = Release|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Win32.ActiveCfg = Release|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|Win32.Build.0 = Release|Win32
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|x64.ActiveCfg = Release|x64
		{68A9917E-A682-4013-8D30-389AB45A9E0D}.Release|x64.Build.0 = Release|x64
	EndGlobalSection
	GlobalSection(SolutionProperties) = preSolution
		HideSolutionNode = FALSE
	EndGlobalSection
EndGlobal

================================================
FILE: FaceCept3D/CMakeLists.txt
================================================
cmake_minimum_required(VERSION 2.8)
add_subdirectory(HeadPoseEstimationFramework)
add_subdirectory(TemplateCreator)

================================================
FILE: FaceCept3D/Config/CommonProperties.props
================================================
$(KINECTSDK10_DIR)/inc;$(FACECEPT3D_3RDPARTY_DIR)\Boost\include;$(FACECEPT3D_3RDPARTY_DIR)\Eigen\include;$(FACECEPT3D_3RDPARTY_DIR)\FLANN\include;$(FACECEPT3D_3RDPARTY_DIR)\Opencv\include;$(FACECEPT3D_3RDPARTY_DIR)\PCL\include;$(FACECEPT3D_3RDPARTY_DIR)\Qhull\include;$(FACECEPT3D_3RDPARTY_DIR)\Qt\include;$(FACECEPT3D_3RDPARTY_DIR)\VTK\include\vtk-5.8;$(ProjectDir);$(VCInstallDir)include;$(VCInstallDir)atlmfc\include;$(WindowsSdkDir)include;$(FrameworkSDKDir)\include
<_PropertySheetDisplayName>CommonPropertiesPaths
/Zm500 %(AdditionalOptions)

================================================
FILE: FaceCept3D/Config/CommonPropertiesLibsDebug.props
================================================
$(KINECTSDK10_DIR)\lib\amd64;$(FACECEPT3D_3RDPARTY_DIR)\Boost\lib;$(FACECEPT3D_3RDPARTY_DIR)\FLANN\lib;$(FACECEPT3D_3RDPARTY_DIR)\Opencv\lib;$(FACECEPT3D_3RDPARTY_DIR)\PCL\lib;$(FACECEPT3D_3RDPARTY_DIR)\Qhull\lib;$(FACECEPT3D_3RDPARTY_DIR)\Qt\lib;$(FACECEPT3D_3RDPARTY_DIR)\VTK\lib\vtk-5.8;$(VCInstallDir)lib\amd64;$(VCInstallDir)atlmfc\lib\amd64;$(WindowsSdkDir)lib\x64 HeadPoseEstimationFramework_Debug.lib;opencv_calib3d2410d.lib;opencv_contrib2410d.lib;opencv_core2410d.lib;opencv_features2d2410d.lib;opencv_flann2410d.lib;opencv_gpu2410d.lib;opencv_highgui2410d.lib;opencv_imgproc2410d.lib;opencv_legacy2410d.lib;opencv_ml2410d.lib;opencv_nonfree2410d.lib;opencv_objdetect2410d.lib;opencv_ocl2410d.lib;opencv_photo2410d.lib;opencv_stitching2410d.lib;opencv_superres2410d.lib;opencv_ts2410d.lib;opencv_video2410d.lib;opencv_videostab2410d.lib;Kinect10.lib;opengl32.lib;qhullstatic_d.lib;pcl_common_debug.lib;pcl_features_debug.lib;pcl_filters_debug.lib;pcl_io_debug.lib;pcl_io_ply_debug.lib;pcl_kdtree_debug.lib;pcl_keypoints_debug.lib;pcl_octree_debug.lib;pcl_outofcore_debug.lib;pcl_people_debug.lib;pcl_recognition_debug.lib;pcl_registration_debug.lib;pcl_sample_consensus_debug.lib;pcl_search_debug.lib;pcl_segmentation_debug.lib;pcl_surface_debug.lib;pcl_tracking_debug.lib;pcl_visualization_debug.lib;flann_cpp_s-gd.lib;libboost_atomic-vc100-mt-gd-1_55.lib;libboost_chrono-vc100-mt-gd-1_55.lib;libboost_context-vc100-mt-gd-1_55.lib;libboost_coroutine-vc100-mt-gd-1_55.lib;libboost_date_time-vc100-mt-gd-1_55.lib;libboost_exception-vc100-mt-gd-1_55.lib;libboost_filesystem-vc100-mt-gd-1_55.lib;libboost_graph-vc100-mt-gd-1_55.lib;libboost_iostreams-vc100-mt-gd-1_55.lib;libboost_locale-vc100-mt-gd-1_55.lib;libboost_log_setup-vc100-mt-gd-1_55.lib;libboost_log-vc100-mt-gd-1_55.lib;libboost_math_c99f-vc100-mt-gd-1_55.lib;libboost_math_c99l-vc100-mt-gd-1_55.lib;libboost_math_c99-vc100-mt-gd-1_55.lib;libboost_math_tr1f-vc100-mt-gd-1_55.lib;libboost_math_tr1l-vc100-mt-gd-1_55.lib;libboost_ma
th_tr1-vc100-mt-gd-1_55.lib;libboost_prg_exec_monitor-vc100-mt-gd-1_55.lib;libboost_program_options-vc100-mt-gd-1_55.lib;libboost_random-vc100-mt-gd-1_55.lib;libboost_regex-vc100-mt-gd-1_55.lib;libboost_serialization-vc100-mt-gd-1_55.lib;libboost_signals-vc100-mt-gd-1_55.lib;libboost_system-vc100-mt-gd-1_55.lib;libboost_test_exec_monitor-vc100-mt-gd-1_55.lib;libboost_thread-vc100-mt-gd-1_55.lib;libboost_timer-vc100-mt-gd-1_55.lib;libboost_unit_test_framework-vc100-mt-gd-1_55.lib;libboost_wave-vc100-mt-gd-1_55.lib;libboost_wserialization-vc100-mt-gd-1_55.lib;phonond4.lib;QAxContainerd.lib;QAxServerd.lib;Qt3Supportd4.lib;QtCLucened4.lib;QtCored4.lib;QtDeclaratived4.lib;QtDesignerComponentsd4.lib;QtDesignerd4.lib;QtGuid4.lib;QtHelpd4.lib;qtmaind.lib;QtMultimediad4.lib;QtNetworkd4.lib;QtOpenGLd4.lib;QtScriptd4.lib;QtScriptToolsd4.lib;QtSqld4.lib;QtSvgd4.lib;QtTestd4.lib;QtUiToolsd.lib;QtWebKitd4.lib;QtXmld4.lib;QtXmlPatternsd4.lib;MapReduceMPI-gd.lib;mpistubs-gd.lib;QVTK-gd.lib;vtkalglib-gd.lib;vtkCharts-gd.lib;vtkCommon-gd.lib;vtkDICOMParser-gd.lib;vtkexoIIc-gd.lib;vtkexpat-gd.lib;vtkFiltering-gd.lib;vtkfreetype-gd.lib;vtkftgl-gd.lib;vtkGenericFiltering-gd.lib;vtkGeovis-gd.lib;vtkGraphics-gd.lib;vtkhdf5-gd.lib;vtkHybrid-gd.lib;vtkImaging-gd.lib;vtkInfovis-gd.lib;vtkIO-gd.lib;vtkjpeg-gd.lib;vtklibxml2-gd.lib;vtkmetaio-gd.lib;vtkNetCDF-gd.lib;vtkNetCDF_cxx-gd.lib;vtkpng-gd.lib;vtkproj4-gd.lib;vtkRendering-gd.lib;vtksqlite-gd.lib;vtksys-gd.lib;vtktiff-gd.lib;vtkverdict-gd.lib;vtkViews-gd.lib;vtkVolumeRendering-gd.lib;vtkWidgets-gd.lib;vtkzlib-gd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) ================================================ FILE: FaceCept3D/Config/CommonPropertiesLibsRelease.props ================================================  
$(KINECTSDK10_DIR)\lib\amd64;$(FACECEPT3D_3RDPARTY_DIR)\Boost\lib;$(FACECEPT3D_3RDPARTY_DIR)\FLANN\lib;$(FACECEPT3D_3RDPARTY_DIR)\Opencv\lib;$(FACECEPT3D_3RDPARTY_DIR)\PCL\lib;$(FACECEPT3D_3RDPARTY_DIR)\Qhull\lib;$(FACECEPT3D_3RDPARTY_DIR)\Qt\lib;$(FACECEPT3D_3RDPARTY_DIR)\VTK\lib\vtk-5.8;$(VCInstallDir)lib\amd64;$(VCInstallDir)atlmfc\lib\amd64;$(WindowsSdkDir)lib\x64 HeadPoseEstimationFramework_Release.lib;Kinect10.lib;opengl32.lib;MapReduceMPI.lib;mpistubs.lib;QVTK.lib;vtkalglib.lib;vtkCharts.lib;vtkCommon.lib;vtkDICOMParser.lib;vtkexoIIc.lib;vtkexpat.lib;vtkFiltering.lib;vtkfreetype.lib;vtkftgl.lib;vtkGenericFiltering.lib;vtkGeovis.lib;vtkGraphics.lib;vtkhdf5.lib;vtkHybrid.lib;vtkImaging.lib;vtkInfovis.lib;vtkIO.lib;vtkjpeg.lib;vtklibxml2.lib;vtkmetaio.lib;vtkNetCDF.lib;vtkNetCDF_cxx.lib;vtkpng.lib;vtkproj4.lib;vtkRendering.lib;vtksqlite.lib;vtksys.lib;vtktiff.lib;vtkverdict.lib;vtkViews.lib;vtkVolumeRendering.lib;vtkWidgets.lib;vtkzlib.lib;opencv_calib3d2410.lib;opencv_contrib2410.lib;opencv_core2410.lib;opencv_features2d2410.lib;opencv_flann2410.lib;opencv_gpu2410.lib;opencv_highgui2410.lib;opencv_imgproc2410.lib;opencv_legacy2410.lib;opencv_ml2410.lib;opencv_nonfree2410.lib;opencv_objdetect2410.lib;opencv_ocl2410.lib;opencv_photo2410.lib;opencv_stitching2410.lib;opencv_superres2410.lib;opencv_ts2410.lib;opencv_video2410.lib;opencv_videostab2410.lib;qhullstatic.lib;pcl_common_release.lib;pcl_features_release.lib;pcl_filters_release.lib;pcl_io_ply_release.lib;pcl_io_release.lib;pcl_kdtree_release.lib;pcl_keypoints_release.lib;pcl_octree_release.lib;pcl_outofcore_release.lib;pcl_people_release.lib;pcl_recognition_release.lib;pcl_registration_release.lib;pcl_sample_consensus_release.lib;pcl_search_release.lib;pcl_segmentation_release.lib;pcl_surface_release.lib;pcl_tracking_release.lib;pcl_visualization_release.lib;flann_cpp_s.lib;libboost_atomic-vc100-mt-1_55.lib;libboost_chrono-vc100-mt-1_55.lib;libboost_context-vc100-mt-1_55.lib;libboost_coroutine-vc100-mt-1_55
.lib;libboost_date_time-vc100-mt-1_55.lib;libboost_exception-vc100-mt-1_55.lib;libboost_filesystem-vc100-mt-1_55.lib;libboost_graph-vc100-mt-1_55.lib;libboost_iostreams-vc100-mt-1_55.lib;libboost_locale-vc100-mt-1_55.lib;libboost_log-vc100-mt-1_55.lib;libboost_log_setup-vc100-mt-1_55.lib;libboost_math_c99-vc100-mt-1_55.lib;libboost_math_c99f-vc100-mt-1_55.lib;libboost_math_c99l-vc100-mt-1_55.lib;libboost_math_tr1-vc100-mt-1_55.lib;libboost_math_tr1f-vc100-mt-1_55.lib;libboost_math_tr1l-vc100-mt-1_55.lib;libboost_prg_exec_monitor-vc100-mt-1_55.lib;libboost_program_options-vc100-mt-1_55.lib;libboost_random-vc100-mt-1_55.lib;libboost_regex-vc100-mt-1_55.lib;libboost_serialization-vc100-mt-1_55.lib;libboost_signals-vc100-mt-1_55.lib;libboost_system-vc100-mt-1_55.lib;libboost_test_exec_monitor-vc100-mt-1_55.lib;libboost_thread-vc100-mt-1_55.lib;libboost_timer-vc100-mt-1_55.lib;libboost_unit_test_framework-vc100-mt-1_55.lib;libboost_wave-vc100-mt-1_55.lib;libboost_wserialization-vc100-mt-1_55.lib;phonon4.lib;QAxContainer.lib;QAxServer.lib;Qt3Support4.lib;QtCLucene4.lib;QtCore4.lib;QtDeclarative4.lib;QtDesigner4.lib;QtDesignerComponents4.lib;QtGui4.lib;QtHelp4.lib;qtmain.lib;QtMultimedia4.lib;QtNetwork4.lib;QtOpenGL4.lib;QtScript4.lib;QtScriptTools4.lib;QtSql4.lib;QtSvg4.lib;QtTest4.lib;QtUiTools.lib;QtWebKit4.lib;QtXml4.lib;QtXmlPatterns4.lib;%(AdditionalDependencies) ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Align/CloudMapper.cpp ================================================ #include "CloudMapper.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "UI/ShowCloud.h" #include "Exception/HPEException.h" #include "Filter/Filters.h" #include "TransformationEstimationHPE.h" #include namespace hpe { float CalculateDistance(pcl::PointXYZRGBA &p1, pcl::PointXYZRGBA &p2) { Eigen::VectorXf d(3); d(0) = p1.x 
- p2.x;
        d(1) = p1.y - p2.y;
        d(2) = p1.z - p2.z;
        return d.norm();
    }

    CloudMapper::CloudMapper()
        : m_icpParams(CloudMapper::ICPParams(0.05, 0.05, 0.000001, 100000, 100)),
          m_useDynamicICP(false), m_usePointToPlaneMetric(false), m_useNormalShooting(false), m_estimateScale(false)
    {
    }

    void CloudMapper::AddCloud(CloudMapper::Cloud::Ptr &cloud, CloudMapper::Landmarks &landmarks)
    {
        m_pointClouds.push_back(cloud);
        m_landmarks.push_back(landmarks);
    }

    CloudMapper::Clouds CloudMapper::GetAlignedClouds(TransformationsPtr transformations)
    {
        return AlignAll(m_pointClouds, m_landmarks);
    }

    CloudMapper::Cloud::Ptr CloudMapper::GetMergedCloud()
    {
        CloudMapper::Clouds alignedClouds = AlignAll(m_pointClouds, m_landmarks);
        CloudMapper::Cloud::Ptr merged(new CloudMapper::Cloud);
        auto it = alignedClouds.begin();
        do { auto current = (*it); *merged += *current; ++it; } while (it != alignedClouds.end());
        return merged;
    }

    CloudMapper::TransformationsPtr CloudMapper::GetTransforms()
    {
        CloudMapper::TransformationsPtr transforms(new CloudMapper::Transformations);
        AlignAll(m_pointClouds, m_landmarks, transforms);
        return transforms;
    }

    void CloudMapper::SetICPParams(CloudMapper::ICPParams params) { m_icpParams = params; }
    void CloudMapper::SetDynamicICPParams(CloudMapper::DynamicICPParams &params) { m_dynamicICPParams = params; SetUseDynamicICP(true); }
    void CloudMapper::SetUseDynamicICP(bool use) { m_useDynamicICP = use; }
    void CloudMapper::SetUseNormalShooting(bool use) { m_useNormalShooting = use; }
    void CloudMapper::SetEstimateScale(bool estimateScale) { m_estimateScale = estimateScale; }
    void CloudMapper::SetUsePointToPlaneMetric(bool use) { m_usePointToPlaneMetric = use; }

    Eigen::Matrix4f CloudMapper::GetTransformForTwoCloudsPointToPlane(CloudMapper::Cloud::Ptr &source, CloudMapper::Cloud::Ptr &target, ICPParams params)
    {
        CloudMapper::NormalCloud::Ptr normalSource(new CloudMapper::NormalCloud);
        pcl::copyPointCloud(*source, *normalSource);
        CloudMapper::NormalCloud::Ptr normalTarget(new
CloudMapper::NormalCloud); pcl::copyPointCloud(*target, *normalTarget); typedef pcl::search::KdTree Search2; Search2::Ptr search(new Search2); pcl::NormalEstimation estimation; estimation.setSearchMethod(search); estimation.setKSearch(10); estimation.setInputCloud(normalTarget); estimation.compute(*normalTarget); typedef pcl::registration::CorrespondenceRejectorSampleConsensus Rejector; pcl::Registration::Ptr registration(new pcl::IterativeClosestPoint); pcl::registration::CorrespondenceRejectorSampleConsensus::Ptr rejector(new pcl::registration::CorrespondenceRejectorSampleConsensus); rejector->setInlierThreshold(params.RansacRejectionThreshold); rejector->setMaximumIterations(params.MaxRANSACIterations); rejector->setInputSource(normalSource); rejector->setInputTarget(normalTarget); registration->setInputSource(normalSource); registration->setInputTarget(normalTarget); registration->setMaxCorrespondenceDistance(params.MaxCorrespondenceDistance); registration->setRANSACOutlierRejectionThreshold(params.RansacRejectionThreshold); registration->setTransformationEpsilon(params.TransformationEpsilon); registration->setMaximumIterations(params.MaxICPIterations); registration->setRANSACIterations(params.MaxRANSACIterations); registration->addCorrespondenceRejector(rejector); if (m_useNormalShooting) { typedef pcl::registration::CorrespondenceEstimationNormalShooting NormalShooting; NormalShooting::Ptr ce(new NormalShooting); registration->setCorrespondenceEstimation(ce); } typedef TransformationEstimationHPE Custom; Custom::Ptr metric(new Custom); metric->SetWeights(m_weights); registration->setTransformationEstimation(metric); CloudMapper::NormalCloud::Ptr c(new CloudMapper::NormalCloud); registration->align(*c); //std::cout << "Fitness Score " << registration->getFitnessScore() << std::endl; return registration->getFinalTransformation(); } Eigen::Matrix4f CloudMapper::GetTransformForTwoCloudsPointToPoint(CloudMapper::Cloud::Ptr &source, CloudMapper::Cloud::Ptr &target, 
ICPParams params) { pcl::Registration::Ptr registration(new pcl::IterativeClosestPoint); typedef pcl::registration::CorrespondenceRejectorSampleConsensus Rejector; Rejector::Ptr rejector(new Rejector); rejector->setInlierThreshold(params.RansacRejectionThreshold); rejector->setMaximumIterations(params.MaxRANSACIterations); rejector->setInputSource(source); rejector->setInputTarget(target); registration->setInputSource(source); registration->setInputTarget(target); registration->setMaxCorrespondenceDistance(params.MaxCorrespondenceDistance); registration->setRANSACOutlierRejectionThreshold(params.RansacRejectionThreshold); registration->setTransformationEpsilon(params.TransformationEpsilon); registration->setMaximumIterations(params.MaxICPIterations); registration->setRANSACIterations(params.MaxRANSACIterations); registration->addCorrespondenceRejector(rejector); CloudMapper::Cloud::Ptr c(new CloudMapper::Cloud); registration->align(*c); std::cout << "Fitness Score " << registration->getFitnessScore() << std::endl; return registration->getFinalTransformation(); } Eigen::Matrix4f CloudMapper::GetTransformForTwoCloudsDynamically(CloudMapper::Cloud::Ptr &source, CloudMapper::Cloud::Ptr &target, CloudMapper::DynamicICPParams &dynamicParams) { int icpRound = 1; CloudMapper::Cloud::Ptr result(new Cloud); pcl::copyPointCloud(*source, *result); Eigen::Matrix4f finalTransformationMatrix = Eigen::Matrix4f::Identity(); for (auto it = dynamicParams.begin(); it != dynamicParams.end(); ++it) { auto params = *it; Eigen::Matrix4f stepTransform = Eigen::Matrix4f::Identity(); if (m_usePointToPlaneMetric) { stepTransform = GetTransformForTwoCloudsPointToPlane(result, target, params); } else { stepTransform = GetTransformForTwoCloudsPointToPoint(result, target, params); } pcl::transformPointCloud(*result, *result, stepTransform); finalTransformationMatrix = finalTransformationMatrix * stepTransform; } return finalTransformationMatrix; } Eigen::Matrix4f 
CloudMapper::GetTransformHavingLandmarks(CloudMapper::Cloud::Ptr &source, Common::Landmarks &l1, CloudMapper::Cloud::Ptr &target, Common::Landmarks &l2) { pcl::CorrespondencesPtr correspondences = LandmarksToCorresponencies(source, l1, target, l2); return GetTransformHavingCorrespondences(source, target, correspondences); } Eigen::Matrix4f CloudMapper::GetTransformHavingCorrespondences(CloudMapper::Cloud::Ptr &source, CloudMapper::Cloud::Ptr &target, pcl::CorrespondencesPtr &correspondences) { Eigen::Matrix4f initialAlignmentMatrix = ComputeInitialAlignment(source, target, correspondences); bool initialAlignmentSucceed = IsNotNAN(initialAlignmentMatrix); CloudMapper::Cloud::Ptr sourceTransformed(new CloudMapper::Cloud); if (initialAlignmentSucceed) { pcl::transformPointCloud(*source, *sourceTransformed, initialAlignmentMatrix); } else { sourceTransformed = source; } if (m_estimateScale) { ShowTwoCloudsInDifferentColors(sourceTransformed, target, pcl::CorrespondencesPtr(), "After IA"); } CloudMapper::DynamicICPParams dynamicParams = m_dynamicICPParams; if (m_useDynamicICP == false) { dynamicParams.push_back(m_icpParams); } CloudMapper::Cloud::Ptr result = sourceTransformed; Eigen::Matrix4f finalTransformationMatrix = GetTransformForTwoCloudsDynamically(sourceTransformed, target, dynamicParams); Eigen::Matrix4f totalTransform; if (initialAlignmentSucceed) { totalTransform = finalTransformationMatrix * initialAlignmentMatrix; } else { totalTransform = finalTransformationMatrix; } pcl::transformPointCloud(*source, *result, totalTransform); return totalTransform; } CloudMapper::Clouds CloudMapper::AlignAll(CloudMapper::Clouds &clouds, CloudMapper::LandmarksVector &landmarksVector, TransformationsPtr transformations) { TransformationsPtr allTransformations(new Transformations); if (transformations != nullptr) { allTransformations = transformations; } if (clouds.size() != landmarksVector.size()) { throw HPEException("CloudMapper::AlignAll - clouds.size() != 
landmarksVector.size()"); } std::vector stepTransformations; pcl::PointCloud::Ptr sourceCloud = clouds[0]; for (int cloudIndex = 1; cloudIndex < clouds.size() - 1; cloudIndex++) { int sourceIndex = cloudIndex; int targetIndex = sourceIndex + 1; pcl::PointCloud::Ptr targetCloud = clouds[targetIndex]; auto totalTransform = GetTransformHavingLandmarks(sourceCloud, landmarksVector[sourceIndex], targetCloud, landmarksVector[targetIndex]); Eigen::Matrix3f rotationMatrix = totalTransform.block<3, 3>(0, 0); Eigen::Vector3f translationVector = totalTransform.block<3, 1>(0, 3); Eigen::Quaternionf quaternion(rotationMatrix); Eigen::Matrix3f rotation2(quaternion); pcl::transformPointCloud(*sourceCloud, *sourceCloud, totalTransform); pcl::PointCloud::Ptr newCloud(new pcl::PointCloud); *sourceCloud += *targetCloud; sourceCloud = Voxelize(sourceCloud, 0.006f); //ShowCloud(sourceCloud); if (IsNotNAN(totalTransform) == false) { int q = 42; } std::cout << cloudIndex << std::endl; stepTransformations.push_back(totalTransform); } ShowCloud(sourceCloud); CloudMapper::Clouds result; auto targetCloud = clouds[clouds.size() - 1]; for (int cloudIndex = 0; cloudIndex < clouds.size() - 1; cloudIndex++) { Eigen::Matrix4f accumulatedMatrix = stepTransformations[cloudIndex]; for (int matrixIndex = cloudIndex + 1; matrixIndex < stepTransformations.size(); matrixIndex++) { accumulatedMatrix = accumulatedMatrix * stepTransformations[matrixIndex]; } CloudMapper::Cloud::Ptr transformed(new CloudMapper::Cloud); pcl::transformPointCloud(*clouds[cloudIndex], *transformed, accumulatedMatrix); allTransformations->push_back(accumulatedMatrix); result.push_back(transformed); } result.push_back(targetCloud); return result; } Eigen::Matrix4f CloudMapper::ComputeInitialAlignment(Cloud::Ptr &cloud1, Cloud::Ptr &cloud2, pcl::CorrespondencesPtr &correspondences) { if (correspondences->empty()) { return Eigen::Matrix4f::Identity(); } pcl::registration::CorrespondenceRejectorSampleConsensus sac; 
sac.setInlierThreshold(0.5); sac.setInputSource(cloud1); sac.setInputTarget(cloud2); sac.setInputCorrespondences(correspondences); pcl::Correspondences inliers; sac.getCorrespondences(inliers); Eigen::Matrix4f transformation; if (m_estimateScale) { pcl::registration::TransformationEstimationSVDScale transformationEstimation; transformationEstimation.estimateRigidTransformation(*cloud1, *cloud2, inliers, transformation); } else { pcl::registration::TransformationEstimationSVD transformationEstimation; transformationEstimation.estimateRigidTransformation(*cloud1, *cloud2, inliers, transformation); } if (transformation == Eigen::Matrix4f::Identity() || IsNotNAN(transformation) == false) { std::cout << "Identity or NAN" << std::endl; } return transformation; } Common::Landmark CloudMapper::FindClosestPoint(CloudMapper::Cloud::Ptr &cloud, CloudMapper::PointType &p) { pcl::search::KdTree search; search.setInputCloud(cloud); std::vector indices; std::vector distances; search.nearestKSearch(p, 1, indices, distances); Common::Landmark result; result.index = indices[0]; result.point = cloud->at(indices[0]); return result; } pcl::CorrespondencesPtr CloudMapper::LandmarksToCorresponencies(Cloud::Ptr &cloud1, CloudMapper::Landmarks &landmarks1, Cloud::Ptr &cloud2, CloudMapper::Landmarks &landmarks2) { pcl::CorrespondencesPtr result(new pcl::Correspondences); int resultSize = std::min(landmarks1.size(), landmarks2.size()); for (int i = 0; i < resultSize; i++) { auto sourcePoint1 = landmarks1.at(i).point; auto sourcePoint2 = landmarks2.at(i).point; auto p1IsNotNan = IsNotNAN(sourcePoint1); auto p2IsNotNan = IsNotNAN(sourcePoint2); if (p1IsNotNan && p2IsNotNan) { CloudMapper::Landmarks::value_type pt1 = FindClosestPoint(cloud1, sourcePoint1); CloudMapper::Landmarks::value_type pt2 = FindClosestPoint(cloud2, sourcePoint2); float distance = CalculateDistance(pt1.point, pt2.point); pcl::Correspondence correspondence(pt1.index, pt2.index, distance); result->push_back(correspondence); 
} } return result; }

    bool CloudMapper::IsNotNAN(const Eigen::Matrix4f &x)
    {
        return ((x.array() == x.array())).all();
    }

    bool CloudMapper::IsNotNAN(CloudMapper::PointType &point)
    {
        return (boost::math::isnan(point.x) || boost::math::isnan(point.y) || boost::math::isnan(point.z)) == false;
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Align/CloudMapper.h
================================================
#ifndef CLOUDMAPPER_H
#define CLOUDMAPPER_H

#include #include #include #include #include
//#include "Provider_Old/DataProviderOld.h"
#include

namespace hpe
{
    /**
        \class CloudMapper
        \brief Class that performs iterative closest point alignment.
    */
    class CloudMapper
    {
    public:
        typedef pcl::PointXYZRGBA PointType;
        typedef pcl::PointCloud<PointType> Cloud;
        typedef pcl::PointNormal PointNormal;
        typedef pcl::PointCloud<PointNormal> NormalCloud;
        typedef Common::Landmarks Landmarks;
        typedef std::vector<Landmarks> LandmarksVector;
        typedef std::vector<Cloud::Ptr> Clouds;
        typedef std::vector<Eigen::Matrix4f> Transformations;
        typedef std::shared_ptr<std::vector<Eigen::Matrix4f>> TransformationsPtr;

        struct ICPParams
        {
            ICPParams(float distance, float threshold, float epsilon, int maxICPIterations, int maxRansacIterations)
                : MaxCorrespondenceDistance(distance), RansacRejectionThreshold(threshold), TransformationEpsilon(epsilon),
                  MaxICPIterations(maxICPIterations), MaxRANSACIterations(maxRansacIterations)
            {
            }
            float MaxCorrespondenceDistance;
            float RansacRejectionThreshold;
            float TransformationEpsilon;
            int MaxICPIterations;
            int MaxRANSACIterations;
        };

        typedef std::vector<ICPParams> DynamicICPParams;

        CloudMapper();
        void AddCloud(Cloud::Ptr &cloud, Landmarks &landmarks);
        Clouds GetAlignedClouds(TransformationsPtr transformations = nullptr);
        Cloud::Ptr GetMergedCloud();
        TransformationsPtr GetTransforms();
        void SetICPParams(ICPParams params);
        void SetDynamicICPParams(DynamicICPParams &params);
        void SetUseDynamicICP(bool use);
        void SetUseNormalShooting(bool use);
        void SetEstimateScale(bool estimateScale);
        void SetUsePointToPlaneMetric(bool use);
void SetWeights(std::vector &weights) { m_weights = weights; } Eigen::Matrix4f GetTransformForTwoCloudsPointToPlane(Cloud::Ptr &source, Cloud::Ptr &target, ICPParams params = ICPParams(0.05, 0.05, 0.000001, 100000, 100)); Eigen::Matrix4f GetTransformForTwoCloudsPointToPoint(Cloud::Ptr &source, Cloud::Ptr &target, ICPParams params = ICPParams(0.05, 0.05, 0.000001, 100000, 100)); Eigen::Matrix4f GetTransformForTwoCloudsDynamically(Cloud::Ptr &source, Cloud::Ptr &target, DynamicICPParams &dynamicParams); Eigen::Matrix4f GetTransformHavingLandmarks(Cloud::Ptr &source, Common::Landmarks &l1, Cloud::Ptr &target, Common::Landmarks &l2); Eigen::Matrix4f GetTransformHavingCorrespondences(Cloud::Ptr &source, Cloud::Ptr &target, pcl::CorrespondencesPtr &correspondences); pcl::CorrespondencesPtr LandmarksToCorresponencies(Cloud::Ptr &cloud1, Landmarks &landmarks1, Cloud::Ptr &cloud2, Landmarks &landmarks2); Eigen::Matrix4f ComputeInitialAlignment(Cloud::Ptr &cloud1, Cloud::Ptr &cloud2, pcl::CorrespondencesPtr &correspondencies); private: Clouds m_pointClouds; LandmarksVector m_landmarks; Clouds AlignAll(Clouds &clouds, LandmarksVector &landmarksVector, TransformationsPtr transformations = nullptr); static Common::Landmark FindClosestPoint(Cloud::Ptr &cloud, PointType &p); ICPParams m_icpParams; DynamicICPParams m_dynamicICPParams; bool m_useDynamicICP; bool m_usePointToPlaneMetric; bool m_useNormalShooting; bool m_estimateScale; std::vector m_weights; static bool IsNotNAN(const Eigen::Matrix4f &x); static bool IsNotNAN(CloudMapper::PointType &point); }; } #endif // CLOUDMAPPER_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Align/IHeadTemplateCreator.h ================================================ #ifndef IHEADTEMPLATECREATOR_H #define IHEADTEMPLATECREATOR_H #include #include namespace hpe { template class IHeadTemplateCreator : public IInterface { public: virtual typename pcl::PointCloud::Ptr GetTemplate() = 0; }; } #endif // 
IHEADTEMPLATECREATOR_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Align/IncrementalHeadTemplateCreator.cpp
================================================
#include "IncrementalHeadTemplateCreator.h"

#include <pcl/common/io.h>
#include <pcl/common/time.h>
#include <pcl/common/transforms.h>

#include "Filter/Filters.h"

namespace hpe
{
    IncrementalHeadTemplateCreator::IncrementalHeadTemplateCreator(void)
        : m_template(new CloudType), m_targetCloud(new CloudType)
    {
        m_mapper.SetUseNormalShooting(false);
        m_mapper.SetUsePointToPlaneMetric(true);
        m_icpParams.push_back(CloudMapper::ICPParams(0.03, 0.005, 10e-6, 400, 10));
        m_mapper.SetDynamicICPParams(m_icpParams);
        m_aggregatedTransform = Eigen::Matrix4f::Identity();
    }

    IncrementalHeadTemplateCreator::~IncrementalHeadTemplateCreator(void)
    {
    }

    IncrementalHeadTemplateCreator::CloudType::Ptr IncrementalHeadTemplateCreator::GetTemplate()
    {
        CloudType::Ptr result(new CloudType);
        pcl::copyPointCloud(*m_template, *result);
        return result;
    }

    IncrementalHeadTemplateCreator::CloudType::Ptr IncrementalHeadTemplateCreator::AddCloudToTemplate(CloudType::Ptr cloud)
    {
        if (m_template->points.size() == 0)
        {
            pcl::copyPointCloud(*cloud, *m_template);
            pcl::copyPointCloud(*cloud, *m_targetCloud);
        }
        else
        {
            pcl::ScopeTime t("AddCloudToTemplate");

            Eigen::Matrix4f transformMatrix = m_mapper.GetTransformForTwoCloudsDynamically(cloud, m_targetCloud, m_icpParams);
            m_aggregatedTransform *= transformMatrix;

            CloudType::Ptr transformedSource(new CloudType);
            pcl::transformPointCloud(*cloud, *transformedSource, m_aggregatedTransform);

            *m_template += *transformedSource;
            m_template = Voxelize(m_template, 0.001f);

            pcl::copyPointCloud(*cloud, *m_targetCloud);
        }

        return GetTemplate();
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Align/IncrementalHeadTemplateCreator.h
================================================
#pragma once
#ifndef INCREMENTALHEADTEMPLATECREATOR_H
#define INCREMENTALHEADTEMPLATECREATOR_H

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

#include "Align/IHeadTemplateCreator.h"
#include "Align/CloudMapper.h"

namespace hpe
{
    /**
        \class  IncrementalHeadTemplateCreator
        \brief  Class that performs incremental alignment.
        \author Sergey
        \date   8/11/2015
    */
    class IncrementalHeadTemplateCreator : public IHeadTemplateCreator<pcl::PointXYZRGBA>
    {
        public:
            typedef pcl::PointXYZRGBA PointType;
            typedef pcl::PointCloud<PointType> CloudType;

            IncrementalHeadTemplateCreator(void);
            ~IncrementalHeadTemplateCreator(void);

            CloudType::Ptr GetTemplate();
            CloudType::Ptr AddCloudToTemplate(CloudType::Ptr cloud);

        private:
            CloudType::Ptr m_template;
            CloudType::Ptr m_targetCloud;
            CloudMapper m_mapper;
            CloudMapper::DynamicICPParams m_icpParams;
            Eigen::Matrix4f m_aggregatedTransform;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Align/TransformationEstimationHPE.cpp
================================================
#include "TransformationEstimationHPE.h"

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Align/TransformationEstimationHPE.h
================================================
#ifndef TRANSFORMATIONESTIMATION_H
#define TRANSFORMATIONESTIMATION_H

#include <pcl/registration/transformation_estimation_point_to_plane_weighted.h>

#include <boost/shared_ptr.hpp>

namespace hpe
{
    /**
        \class  TransformationEstimationHPE
        \brief  Class that performs history-based weighting of the points for ICP.
                Thanks to this, ICP is sped up 4 times. The class overrides the
                standard pcl estimator to get the desired functionality.

        \tparam PointSource point type of the source point cloud.
        \tparam PointTarget point type of the target point cloud.
        \tparam MatScalar   scalar type.
    */
    template <typename PointSource, typename PointTarget, typename MatScalar = float>
    class TransformationEstimationHPE
        : public pcl::registration::TransformationEstimationPointToPlaneWeighted<PointSource, PointTarget, MatScalar>
    {
            typedef TransformationEstimationHPE<PointSource, PointTarget, MatScalar> Self;
            typedef pcl::registration::TransformationEstimationPointToPlaneWeighted<PointSource, PointTarget, MatScalar> Base;
            typedef typename Base::Matrix4 Matrix4;

        public:
            typedef boost::shared_ptr<TransformationEstimationHPE<PointSource, PointTarget, MatScalar>> Ptr;

            TransformationEstimationHPE()
            {
            }

            void SetWeights(std::vector<MatScalar> &weights)
            {
                m_weights = weights;
            }

            inline void estimateRigidTransformation(
                const pcl::PointCloud<PointSource> &cloud_src,
                const pcl::PointCloud<PointTarget> &cloud_tgt,
                Matrix4 &transformation_matrix) const
            {
                Base::estimateRigidTransformation(cloud_src, cloud_tgt, transformation_matrix);
            }

            inline void estimateRigidTransformation(
                const pcl::PointCloud<PointSource> &cloud_src,
                const std::vector<int> &indices_src,
                const pcl::PointCloud<PointTarget> &cloud_tgt,
                Matrix4 &transformation_matrix) const
            {
                Base::estimateRigidTransformation(cloud_src, indices_src, cloud_tgt, transformation_matrix);
            }

            void estimateRigidTransformation(
                const pcl::PointCloud<PointSource> &cloud_src,
                const std::vector<int> &indices_src,
                const pcl::PointCloud<PointTarget> &cloud_tgt,
                const std::vector<int> &indices_tgt,
                Matrix4 &transformation_matrix) const
            {
                std::vector<MatScalar> weights(indices_src.size());
                if (m_weights.size() != 0)
                {
                    for (size_t i = 0; i < indices_src.size(); i += 1)
                    {
                        weights[i] = m_weights[indices_src[i]];
                    }
                }
                else
                {
                    for (size_t i = 0; i < indices_src.size(); i += 1)
                    {
                        weights[i] = 1;
                    }
                }

                Self *self = const_cast<Self *>(this);
                self->Base::setWeights(weights);
                self->Base::setUseCorrespondenceWeights(false);

                Base::estimateRigidTransformation(cloud_src, indices_src, cloud_tgt, indices_tgt, transformation_matrix);
            }

            void estimateRigidTransformation(
                const pcl::PointCloud<PointSource> &cloud_src,
                const pcl::PointCloud<PointTarget> &cloud_tgt,
                const pcl::Correspondences &correspondences,
                Matrix4 &transformation_matrix) const
            {
                Base::estimateRigidTransformation(cloud_src, cloud_tgt, correspondences, transformation_matrix);
            }

        private:
            std::vector<MatScalar> m_weights;
    };
}

#endif // TRANSFORMATIONESTIMATION_H
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/CMakeLists.txt
================================================
cmake_minimum_required(VERSION 2.8)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

project(HeadPoseEstimationFramework)

FILE (GLOB_RECURSE SRC_LIST *.cpp *.h)

foreach (SRC ${SRC_LIST})
    string (FIND ${SRC} "WindowsOnly" FOUND)
    if (NOT ${FOUND} EQUAL "-1")
        list (REMOVE_ITEM SRC_LIST ${SRC})
    endif()
endforeach ()

find_package(PCL 1.7 REQUIRED)
find_package(OpenCV REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${CMAKE_CURRENT_LIST_DIR})
link_directories(${PCL_LIBRARY_DIRS} ${CMAKE_BINARY_DIR})
add_definitions(${PCL_DEFINITIONS})

add_library(${PROJECT_NAME} SHARED ${SRC_LIST})
target_link_libraries(${PROJECT_NAME} ${PCL_LIBRARIES} ${OpenCV_LIBS})

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Common.h
================================================
#pragma once
#ifndef COMMON_H
#define COMMON_H

#include <vector>
#include <list>

namespace hpe
{
    template <typename PointType>
    class Common
    {
        public:
            struct Landmark
            {
                Landmark() {}
                Landmark(int i, PointType p)
                    : index(i), point(p)
                {
                }

                int index;
                PointType point;
            };

            typedef std::vector<Landmark> Landmarks;
    };

    typedef std::vector<int> PointIndexes;
    typedef std::list<float> PointHistory;
}

#endif // COMMON_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Converter/IDataConverter.h
================================================
#ifndef IDATACONVERTER_H
#define IDATACONVERTER_H

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/core/core.hpp>

#include "Interface/IInterface.h"

namespace hpe
{
    class IDataConverter : public IInterface
    {
        public:
            virtual pcl::PointCloud<pcl::PointXYZRGBA>::Ptr DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr) = 0;
    };
}

#endif // IDATACONVERTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Converter/KinectDataConverter.cpp
================================================
#include "KinectDataConverter.h"
#include <limits>

#include <boost/math/special_functions/fpclassify.hpp>

namespace hpe
{
    typedef union
    {
        struct
        {
            unsigned char Blue;
            unsigned char Green;
            unsigned char Red;
            unsigned char Alpha;
        };
        float float_value;
        uint32_t long_value;
    } RGBValue;

    KinectDataConverter::KinectDataConverter()
    {
    }

    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr KinectDataConverter::DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr)
    {
        cv::Size depthImageSize = depth.size();

        float centerX, centerY;
        float scaleFactorX, scaleFactorY;
        ComputeCameraIntrinsics(depthImageSize, centerX, centerY, scaleFactorX, scaleFactorY);

        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr result(new pcl::PointCloud<pcl::PointXYZRGBA>);
        result->width = depthImageSize.width;
        result->height = depthImageSize.height;
        result->is_dense = false;
        result->points.resize(depthImageSize.width * depthImageSize.height);

        for (int y = 0; y < depthImageSize.height; ++y)
        {
            for (int x = 0; x < depthImageSize.width; ++x)
            {
                pcl::PointXYZRGBA &pt = result->at(x, y);

                float depthValue = (float) depth.at<float>(y, x);
                if (depthValue == -1000.f || depthValue == 0.0f || boost::math::isnan(depthValue))
                {
                    pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN();
                }
                else
                {
                    //depthValue /= 1000;
                    pt.x = (static_cast<float>(x) - centerX) * scaleFactorX * depthValue;
                    pt.y = (centerY - static_cast<float>(y)) * scaleFactorY * depthValue;
                    pt.z = depthValue;

                    cv::Vec3b colorValue = bgr.at<cv::Vec3b>(y, x);
                    RGBValue color;
                    color.Red = colorValue[2];
                    color.Green = colorValue[1];
                    color.Blue = colorValue[0];
                    pt.rgba = color.long_value;
                }
            }
        }

        return result;
    }

    void KinectDataConverter::ComputeCameraIntrinsics(cv::Size &imageSize, float &centerX, float &centerY, float &scaleFactorX, float &scaleFactorY)
    {
        int dims[] = {imageSize.width, imageSize.height};
        // The 525 focal-length default is only true for VGA; for other
        // resolutions the factor has to be scaled accordingly.
        scaleFactorX = scaleFactorY = 1 / 525.f * 640.f / dims[0];
        centerX = ((float)dims[0] - 1.f) / 2.f;
        centerY = ((float)dims[1] - 1.f) / 2.f;
    }

    pcl::PointXYZ KinectDataConverter::ConvertOnePoint(float x, float y, float depthValue, cv::Size frameSize)
    {
        float centerX, centerY;
        float scaleFactorX, scaleFactorY;
        ComputeCameraIntrinsics(frameSize, centerX, centerY, scaleFactorX, scaleFactorY);

        pcl::PointXYZ pt;
        pt.x = (static_cast<float>(x) - centerX) * scaleFactorX * depthValue;
        pt.y = (centerY - static_cast<float>(y)) * scaleFactorY * depthValue;
        pt.z = depthValue;

        return pt;
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Converter/KinectDataConverter.h
================================================
#ifndef KINECTDATACONVERTER_H
#define KINECTDATACONVERTER_H

#include <Converter/IDataConverter.h>

namespace hpe
{
    /**
        \class  KinectDataConverter
        \brief  Converts an RGB-D pair to a point cloud. Works only for a Kinect sensor.
        \author Sergey
        \date   8/11/2015
    */
    class KinectDataConverter : public IDataConverter
    {
        public:
            KinectDataConverter();

            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr) override;

            pcl::PointXYZ ConvertOnePoint(float x, float y, float depthValue, cv::Size frameSize);

        protected:
            virtual void ComputeCameraIntrinsics(cv::Size &imageSize, float &centerX, float &centerY, float &scaleFactorX, float &scaleFactorY);
    };
}

#endif // KINECTDATACONVERTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataFilter/IDataFilter.h
================================================
#ifndef IDATAFILTER_H
#define IDATAFILTER_H

#include "Interface/IInterface.h"

namespace hpe
{
    template <typename DataType>
    class DataFilter : public IInterface
    {
        public:
            virtual DataType Filter(DataType &input) = 0;
    };
}

#endif // IDATAFILTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/CloudXYZRGBA.h
================================================
#pragma once
#ifndef CLOUDXYZRGBA_H
#define CLOUDXYZRGBA_H

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

#include <DataObject/IDataObject.h>

namespace hpe
{
    class CloudXYZRGBA : public IDataObject
    {
        public:
            typedef std::shared_ptr<CloudXYZRGBA> Ptr;

            CloudXYZRGBA()
            {
            }

            CloudXYZRGBA(bool create)
            {
                if (create)
                {
                    cloud = pcl::PointCloud<pcl::PointXYZRGBA>::Ptr(new pcl::PointCloud<pcl::PointXYZRGBA>);
                }
            }

            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/HeadPoseInfo.h
================================================
#ifndef HEADPOSEINFO_H
#define HEADPOSEINFO_H

#include <DataObject/IDataObject.h>

namespace hpe
{
    class HeadPoseInformation : public IDataObject
    {
        public:
            typedef std::shared_ptr<HeadPoseInformation> Ptr;

            HeadPoseInformation()
            {
                Yaw = 0;
                Tilt = 0;
                Roll = 0;
                WorldX = 0;
                WorldY = 0;
                WorldZ = 0;
                DepthFrameX = 0;
                DepthFrameY = 0;
                HasHeadInfo = false;
            }

            double Yaw;
            double Tilt;
            double Roll;
            double WorldX;
            double WorldY;
            double WorldZ;
            double DepthFrameX;
            double DepthFrameY;
            bool HasHeadInfo;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/IDataObject.h
================================================
#pragma once
#ifndef IDATAOBJECT_H
#define IDATAOBJECT_H

#include <memory>

#include <Interface/IInterface.h>

namespace hpe
{
    /**
        \class  IDataObject
        \brief  Interface for the objects that are put into an IDataStorage.
                See subclasses for examples.
        \author Sergey
        \date   8/11/2015
    */
    class IDataObject : public IInterface
    {
        public:
            typedef std::shared_ptr<IDataObject> Ptr;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/IDataStorage.h
================================================
#pragma once
#ifndef IDATASTORAGE_H
#define IDATASTORAGE_H

#include <memory>
#include <string>

#include <DataObject/IDataObject.h>
#include <Exception/HPEException.h>

namespace hpe
{
    /**
        \class  IDataStorage
        \brief  One of the key classes used to organize communication between processors.
                Basically a key-value storage.
        \author Sergey
        \date   8/11/2015
    */
    class IDataStorage : public IInterface
    {
        public:
            typedef std::shared_ptr<IDataStorage> Ptr;

            /**
                \fn     virtual void IDataStorage::Set(std::string key, IDataObject::Ptr object) = 0;
                \brief  Adds an object into the storage under the given key.
                \author Sergey
                \date   8/11/2015
                \param  key    Key
                \param  object Object to add. The object should be wrapped in an IDataObject
            */
            virtual void Set(std::string key, IDataObject::Ptr object) = 0;

            virtual IDataObject::Ptr Get(std::string key) = 0;

            virtual void Remove(std::string key) = 0;

            template <typename T>
            std::shared_ptr<T> GetAndCast(std::string key)
            {
                IDataObject::Ptr obj = Get(key);
                return std::dynamic_pointer_cast<T>(obj);
            }

            template <typename T>
            std::shared_ptr<T> GetAndCastNotNull(std::string key, std::string message = "Got nullptr from storage")
            {
                std::shared_ptr<T> ptr = GetAndCast<T>(key);
                if (ptr.get() == nullptr)
                {
                    throw HPEException(message);
                }
                return ptr;
            }
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/LandmarksObject.h
================================================
#pragma once
#ifndef LANDMARKSOBJECT_H
#define LANDMARKSOBJECT_H

#include <Common.h>
#include <DataObject/IDataObject.h>

namespace hpe
{
    template <typename PointType>
    class LandmarksObject : public IDataObject
    {
        public:
            typedef std::shared_ptr<LandmarksObject<PointType>> Ptr;

            typename Common<PointType>::Landmarks landmarks;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/MapDataStorage.cpp
================================================
#include "MapDataStorage.h"

namespace hpe
{
    void MapDataStorage::Set(std::string key, IDataObject::Ptr object)
    {
        m_storage[key] = object;
    }

    IDataObject::Ptr MapDataStorage::Get(std::string key)
    {
        return m_storage[key];
    }

    void MapDataStorage::Remove(std::string key)
    {
        auto it = m_storage.find(key);
        if (it != m_storage.end())
        {
            m_storage.erase(it);
        }
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/MapDataStorage.h
================================================
#pragma once
#ifndef MAPDATASTORAGE_H
#define MAPDATASTORAGE_H

#include <map>

#include <DataObject/IDataStorage.h>

namespace hpe
{
    class MapDataStorage : public IDataStorage
    {
        public:
            void Set(std::string key, IDataObject::Ptr object);
            IDataObject::Ptr Get(std::string key);
            void Remove(std::string key);

        private:
            std::map<std::string, IDataObject::Ptr> m_storage;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/DataObject/RawFrames.h
================================================
#pragma once
#ifndef RAWFRAMES_H
#define RAWFRAMES_H

#include <opencv2/core/core.hpp>

#include <DataObject/IDataObject.h>

namespace hpe
{
    class RawFrames : public IDataObject
    {
        public:
            typedef std::shared_ptr<RawFrames> Ptr;

            cv::Mat colorFrame;
            cv::Mat depthFrame;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/DepthDetector.cpp
================================================
#include "DepthDetector.h"

#include <boost/math/special_functions/fpclassify.hpp>

namespace hpe
{
    DepthDetector::DepthDetector(std::string dataFolder)
    {
        Init(dataFolder);
    }

    DepthDetector::~DepthDetector(void)
    {
    }

    hpe::headPoseInfo DepthDetector::Detect(cv::Mat depthFrame)
    {
        cv::Mat converted(depthFrame.rows, depthFrame.cols, depthFrame.type());
        for (int y = 0; y < depthFrame.rows; ++y)
        {
            for (int x = 0; x < depthFrame.cols; ++x)
            {
                float depthValue = depthFrame.at<float>(y, x);
                if (depthValue > 1.5 || depthValue == 0.0f || boost::math::isnan(depthValue))
                {
                    depthValue = 0;
                }
                else
                {
                    depthValue = depthValue * 1000;
                }
                converted.at<float>(y, x) = depthValue;
            }
        }

        headPoseInfo info = estimateHeadAngles(converted, m_cascade, m_paramList);
        return info;
    }

    void DepthDetector::Init(std::string dataFolder)
    {
        std::string configFileName = dataFolder + "/config.txt";
        loadConfig(configFileName, &m_paramList);

        std::string modelsDir = dataFolder + "/models/";
        for (int i = 0; i < 5; i++)
        {
            m_cascade.push_back(readTree(modelsDir, i + 1));
        }
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/DepthDetector.h
================================================
#pragma once
#ifndef DEPTHDETECTOR_H
#define DEPTHDETECTOR_H

#include <string>

#include "localFunctions.h"
#include "Tree.h"

namespace hpe
{
    class DepthDetector
    {
        public:
            DepthDetector(std::string dataFolder);
            ~DepthDetector(void);

            headPoseInfo Detect(cv::Mat depthFrame);

        private:
            void Init(std::string dataFolder);

            paramList m_paramList;
            std::vector<Tree> m_cascade;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/LeafNode.cpp
================================================
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <cmath>

#include "LeafNode.h"

using namespace cv;
using namespace std;
using std::vector;

namespace hpe
{
    void evaluatePackedTree(Mat patchInt, Mat maskInt, Mat treeMat, Mat treeIdxMap, int initDepth, const cv::Rect roi, LeafNode *leaf)
    {
        int leafDim = treeMat.cols; //(int*) mxGetDimensions(input[2]);

        leaf->mean.create(1, 6, CV_64FC1);

        int node = initDepth;
        double pnode = (int)treeMat.at<double>(node, 1);
        int incr = 0;
        int xa1, ya1, xa2, ya2, xb1, yb1, xb2, yb2;

        while (pnode == -1)
        {
            xa1 = roi.x + (int)treeMat.at<double>(node, 2);
            ya1 = roi.y + (int)treeMat.at<double>(node, 3);
            xa2 = xa1 + (int)treeMat.at<double>(node, 6);
            ya2 = ya1 + (int)treeMat.at<double>(node, 7);
            xb1 = roi.x + (int)treeMat.at<double>(node, 4);
            yb1 = roi.y + (int)treeMat.at<double>(node, 5);
            xb2 = xb1 + (int)treeMat.at<double>(node, 8);
            yb2 = yb1 + (int)treeMat.at<double>(node, 9);

            double numrA = patchInt.at<double>(ya2, xa2) - patchInt.at<double>(ya2, xa1) - patchInt.at<double>(ya1, xa2) + patchInt.at<double>(ya1, xa1);
            double numA = maskInt.at<double>(ya2, xa2) - maskInt.at<double>(ya2, xa1) - maskInt.at<double>(ya1, xa2) + maskInt.at<double>(ya1, xa1);
            double numrB = patchInt.at<double>(yb2, xb2) - patchInt.at<double>(yb2, xb1) - patchInt.at<double>(yb1, xb2) + patchInt.at<double>(yb1, xb1);
            double numB = maskInt.at<double>(yb2, xb2) - maskInt.at<double>(yb2, xb1) - maskInt.at<double>(yb1, xb2) + maskInt.at<double>(yb1, xb1);

            double mxA = (numA > 1) ? numA : 1;
            double mxB = (numB > 1) ? numB : 1;

            double testVal = numrA / mxA - numrB / mxB;
            int test = (testVal >= treeMat.at<double>(node, 11)) ? 1 : 0;

            node = (int)treeMat.at<double>(node, 0) - 1;
            incr = node + test + 1;
            node = (int)(treeIdxMap.at<double>(node + incr, 0) - 1);
            pnode = (int)treeMat.at<double>(node, 1);
        }

        /*for(int i=0; i< leafDim; i++) { *out = treeMat.at<double>(node, i); out++; }*/

        leaf->pfg = treeMat.at<double>(node, 2);
        leaf->trace = treeMat.at<double>(node, 3);
        //cout << treeMat.at<double>(node, 4) << endl;
        leaf->mean = (Mat_<double>(1, 6) << treeMat.at<double>(node, 4), treeMat.at<double>(node, 5), treeMat.at<double>(node, 6),
                      treeMat.at<double>(node, 7), treeMat.at<double>(node, 8), treeMat.at<double>(node, 9));

        /*leaf->mean.at<double>(0) = treeMat.at<double>(node, 4);
        leaf->mean.at<double>(1) = treeMat.at<double>(node, 5);
        leaf->mean.at<double>(2) = treeMat.at<double>(node, 6);
        leaf->mean.at<double>(3) = treeMat.at<double>(node, 7);
        leaf->mean.at<double>(4) = treeMat.at<double>(node, 8);
        leaf->mean.at<double>(5) = treeMat.at<double>(node, 9);*/

        //return leafPtr;
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/LeafNode.h
================================================
#ifndef LEAFNODE_H
#define LEAFNODE_H

#include <vector>

#include <opencv2/core/core.hpp>

using namespace cv;
using namespace std;

namespace hpe
{
    class LeafNode
    {
        public:
            LeafNode()
            {
            }

            ~LeafNode()
            {
                mean.release();
            }

            // Probability of belonging to the head region
            double pfg;
            // mean vector
            cv::Mat mean;
            // trace of the covariance matrix
            double trace;
    };

    //LeafNode* evaluatePackedTree(Mat patchInt, Mat maskInt, Mat treeMat, Mat treeIdxMap, int initDepth, const cv::Rect roi);
    void evaluatePackedTree(Mat patchInt, Mat maskInt, Mat treeMat, Mat treeIdxMap, int initDepth, const cv::Rect roi, LeafNode *leaf);
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/Tree.cpp
================================================
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <cstdlib>

#include "Tree.h"
#include "csvMat.h"

using namespace cv;
using namespace std;
using std::vector;

#include <stdio.h>

namespace hpe
{
    Tree readTree(string
csvRootName, int treeIdx) { cout << "Reading tree no. " << treeIdx << " from: " << csvRootName << endl; string treeIdxStr; ostringstream convert; convert << treeIdx; treeIdxStr = convert.str(); string specsFileName = csvRootName + "tree00" + treeIdxStr + "_specs.csv"; Mat specsMat; specsMat.create(1, 6, CV_64FC1); specsMat = readCSVMat(specsFileName, 1, 6); Tree t; t.treeMat.create(int(specsMat.at(0)), int(specsMat.at(1)), CV_64FC1); t.idxMap.create(int(specsMat.at(2)), 1, CV_64FC1); string matFileName = csvRootName + "tree00" + treeIdxStr + "_Mat.csv"; string idxMapFileName = csvRootName + "tree00" + treeIdxStr + "_IdxMap.csv"; t.treeMat = readCSVMat(matFileName, int(specsMat.at(0)), int(specsMat.at(1))); t.idxMap = readCSVMat(idxMapFileName, int(specsMat.at(2)), 1); return t; } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/Tree.h ================================================ #pragma once #ifndef TREE_H #define TREE_H #include #include using namespace cv; using namespace std; namespace hpe { class Tree { public: Tree() { } ~Tree() { } Mat treeMat, idxMap; }; Tree readTree(string csvRootName, int treeIdx); } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/csvMat.cpp ================================================ #include #include #include #include #include #include #include "csvMat.h" using namespace cv; using namespace std; using std::vector; #include namespace hpe { Mat readCSVMat(string csvFrameName, int lin, int col) { Mat output(lin, col, CV_64FC1); ifstream inFile(csvFrameName); string line; int linenum = 0; while (getline(inFile, line)) { linenum++; //cout << "\nLine #" << linenum << ":" << endl; istringstream linestream(line); string item; int itemnum = 0; while (getline(linestream, item, ',')) { itemnum++; //cout << "Item #" << itemnum << ": " << item << endl; double temp = 
(double)atof(item.c_str()); output.at(linenum - 1, itemnum - 1) = (double)temp; } } return output; } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/csvMat.h ================================================ #pragma once #ifndef CSVMAT_H #define CSVMAT_H #include #include #include #include #include #include "opencv2/core/core.hpp" using namespace std; using namespace cv; namespace hpe { Mat readCSVMat(string csvFrameName, int lin, int col); } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/localFunctions.cpp ================================================ #include #include #include #include #include #include #include "localFunctions.h" #include #include #include "Tree.h" #include "LeafNode.h" using namespace std; using namespace cv; namespace hpe { void loadConfig(std::string filename, paramList *p, bool verbose) { ifstream in(filename); string dummy; if (in.is_open()) { in >> dummy; in >> p->maskThreshold; in >> dummy; in >> p->stride; in >> dummy; in >> p->patchSize; in >> dummy; in >> p->headPatchProb; in >> dummy; in >> p->headDiameter; in >> dummy; in >> p->headPatchDistTh; in >> dummy; in >> p->minVotes; in >> dummy; in >> p->yawTrace; in >> dummy; in >> p->tiltTrace; in >> dummy; in >> p->ytDiameter; } else { cerr << "File not found " << filename << endl; } in.close(); if (verbose) { cout << endl << "------------------------------------" << endl; cout << "Parameter List: " << endl << endl; cout << "maskThreshold: " << p->maskThreshold << endl; cout << "stride: " << p->stride << endl; cout << "patchSize: " << p->patchSize << endl; cout << "headPatchProb: " << p->headPatchProb << endl; cout << "headDiameter: " << p->headDiameter << endl; cout << "headPatchDistTh: " << p->headPatchDistTh << endl; cout << "minNoVotes: " << p->minVotes << endl; cout << "yawTrace: " << p->yawTrace << endl; cout << "tiltTrace: 
" << p->tiltTrace << endl; cout << "yawTiltDiameter: " << p->ytDiameter << endl; cout << endl << "------------------------------------" << endl << endl; } } Rect getBoundingBox(const Mat &im3D, paramList p) { Rect bbox; int min_x = im3D.cols; int min_y = im3D.rows; int max_x = 0; int max_y = 0; int p_width = p.patchSize; int p_height = p.patchSize; for (int y = 0; y < im3D.rows; y++) { const double *Mi = im3D.ptr(y); for (int x = 0; x < im3D.cols; x++) { if (Mi[x] > 0) { min_x = MIN(min_x, x); min_y = MIN(min_y, y); max_x = MAX(max_x, x); max_y = MAX(max_y, y); } } } int new_w = max_x - min_x; int new_h = max_y - min_y; bbox.x = MIN(im3D.cols - 1, MAX(0 , min_x)); bbox.y = MIN(im3D.rows - 1, MAX(0 , min_y)); //int new_w = max_x - min_x + p_width; //int new_h = max_y - min_y + p_height; //bbox.x = MIN( im3D.cols-1, MAX( 0 , min_x - p_width/2 ) ); //bbox.y = MIN( im3D.rows-1, MAX( 0 , min_y - p_height/2) ); bbox.width = MAX(0, MIN(new_w, im3D.cols - bbox.x)); bbox.height = MAX(0, MIN(new_h, im3D.rows - bbox.y)); return bbox; } double computeNorm(vector vec1, vector vec2) { double result = 0; int dim = vec1.size(); vector diff(dim); double sumSq = 0; for (int i = 0; i < dim; i++) { diff[i] = vec1[i] - vec2[i]; sumSq += pow(diff[i], 2); } result = sqrt(sumSq); return result; } double computeMean(vector vec) { double sum = 0; for (int k = 0; k < vec.size(); k++) { sum += vec[k]; } return sum / vec.size(); } Mat meanShift(Mat dataPts, double bandWidth, int *maxClustPointer) { *maxClustPointer = -1; int biggestClusterSize = 0; int numDim = dataPts.cols; int numPts = dataPts.rows; int numClust = -1; double bandSq = bandWidth * bandWidth; std::vector initPtInds(numPts); for (int k = 0; k < initPtInds.size(); k++) initPtInds[k] = k; double stopThresh = 1e-3 * bandWidth; Mat clustCent(0, numDim, CV_64FC1); vector beenVisitedFlag(numPts); int numInitPts = numPts; //vector clusterVotes(numPts); while (numInitPts > 0) { int tempInd = (int)std::ceil((float)(numInitPts - 1) * 
std::rand() / RAND_MAX); double stInd = (double)initPtInds[tempInd]; std::vector myMean(numDim); for (int i = 0; i < numDim; i++) { myMean[i] = (double)dataPts.at(stInd, i); } vector thisClusterVotes(numPts); while (1) { vector sqDistToAll(numPts); for (int i = 0; i < numPts; i++) { double result = 0; for (int j = 0; j < numDim; j++) { double diff = (double)dataPts.at(i, j) - myMean[j]; result += pow(diff, 2); } sqDistToAll[i] = result; } vector inInds(numPts); for (int i = 0; i < numPts; i++) { inInds[i] = sqDistToAll[i] < bandSq; if (inInds[i] == true) { thisClusterVotes[i]++; } } int countTrues = count_if(inInds.begin(), inInds.end(), [](bool b) { return b == true; }); vector myOldMean = myMean; cv::Mat tempData(countTrues, numDim, CV_64FC1); int localIdx = 0; for (int i = 0; i < inInds.size(); i++) { if (inInds[i] == true) { for (int j = 0; j < numDim; j++) { tempData.at(localIdx, j) = (double)dataPts.at(i, j); } localIdx++; beenVisitedFlag[i] = true; } } Mat myMeanMat; cv::reduce(tempData, myMeanMat, 0, CV_REDUCE_AVG); myMeanMat.copyTo(myMean); // compute the 2-norm of the difference-between-means vector double meanNorm = computeNorm(myOldMean, myMean); if (meanNorm < stopThresh) { int mergeWith = -1; for (int cn = 0; cn <= numClust; cn++) { vector localClustCent(numDim); clustCent.row(cn).copyTo(localClustCent); double distToOther = computeNorm(myMean, localClustCent); if (distToOther < bandWidth / 2) { mergeWith = cn; break; } } if (mergeWith >= 0) { Mat localMat = 0.5 * (myMeanMat + clustCent.row(mergeWith)); localMat.copyTo(clustCent.row(mergeWith)); } else { numClust++; clustCent.push_back(myMeanMat); if (tempData.rows > biggestClusterSize) { biggestClusterSize = tempData.rows; *maxClustPointer = *maxClustPointer + 1; } //myMeanMat.copyTo(clustCent.row(numClust)); } break; } } initPtInds.clear(); for (int i = 0; i < beenVisitedFlag.size(); i++) { if (beenVisitedFlag[i] == false) { initPtInds.push_back(i); } } numInitPts = initPtInds.size(); } return 
clustCent; } headPoseInfo estimateHeadAngles(Mat depthFrame, vector cascade, paramList p) { headPoseInfo eAngles; eAngles.x = 0; eAngles.y = 0; eAngles.tilt = -100; eAngles.yaw = -100; int p_width = p.patchSize; int p_height = p.patchSize; Rect bbox = getBoundingBox(depthFrame, p); //integral image of the depth frame int rows = depthFrame.rows; int cols = depthFrame.cols; Mat depthInt(rows + 1, cols + 1, CV_64FC1); integral(depthFrame, depthInt); //mask depthFrame.convertTo(depthFrame, CV_32FC1); Mat mask(rows, cols, CV_32FC1); mask.setTo(0); cv::threshold(depthFrame, mask, p.maskThreshold, 1, THRESH_BINARY); //integral image of the mask Mat maskInt(rows + 1, cols + 1, CV_64FC1); maskInt.setTo(0); integral(mask, maskInt); //defines the test patch Rect roi = Rect(0, 0, p_width, p_height); int min_no_pixels = p_width * p_height / 10; //vector leaves; LeafNode bufferLeaf; //process each patch int startX = std::max(2, bbox.x + 1 - p_width / 2 + 1); int endX = std::min(bbox.x + bbox.width - p_width / 2 + 1, maskInt.cols - p_width); int startY = std::max(2, bbox.y + 1 - p_height / 2 + 1); int endY = std::min(bbox.y + bbox.height - p_height / 2 + 1, maskInt.rows - p_height); vector headPatchIdxX, headPatchIdxY; for (roi.y = startY; roi.y <= endY; roi.y += p.stride) { for (roi.x = startX; roi.x <= endX; roi.x += p.stride) { int x0 = roi.x; int y0 = roi.y; int x1 = x0 + p_width - 1; int y1 = y0 + p_height - 1; //discard if the patch is filled with data for less than 10% double areaSum = maskInt.at(y1, x1) - maskInt.at(y1, x0 - 1) - maskInt.at(y0 - 1, x1) + maskInt.at(y0, x0); if (areaSum / (p_width * p_height) <= 0.1) continue; evaluatePackedTree(depthInt, maskInt, cascade[0].treeMat, cascade[0].idxMap, 0, roi, &bufferLeaf); // store only patches that are almost certain to belong to the head if (bufferLeaf.pfg > p.headPatchProb) { headPatchIdxX.push_back((double)x0); headPatchIdxY.push_back((double)y0); } } } if (headPatchIdxX.size() > 0) { Mat dataPtsX(headPatchIdxX); Mat 
dataPtsY(headPatchIdxY); Mat dataPts; hconcat(dataPtsX, dataPtsY, dataPts); int maxClustIdx = -1; // initialize the index of the biggest cluster (in case there are more than 1, it is useful to know which is the biggest) Mat clustCent = meanShift(dataPts, p.headDiameter, &maxClustIdx); vector clustCentVec(dataPts.cols); clustCent.row(maxClustIdx).copyTo(clustCentVec); // once the center of the patches that "think" they belong to the head has been found // it's time to call the X and Y offset trees that will vote for the head center // and the Yaw/Tilt trees for the yaw/tilt predictions Mat votes(0, 8, CV_64FC1); // offsetX, offsetY, offsetZ(not used), yaw, tilt, roll(not used), traceYaw, traceTilt Mat dummyVote(1, 8, CV_64FC1); Mat headPatchDataMat(0, 2, CV_64FC1); Mat headPatchCenterAux(1, 2, CV_64FC1); for (roi.y = startY; roi.y <= endY; roi.y += p.stride) { for (roi.x = startX; roi.x <= endX; roi.x += p.stride) { vector currPoint(clustCentVec.size()); currPoint[0] = roi.x; currPoint[1] = roi.y; int x0 = roi.x; int y0 = roi.y; int x1 = x0 + p_width - 1; int y1 = y0 + p_height - 1; double areaSum = maskInt.at(y1, x1) - maskInt.at(y1, x0 - 1) - maskInt.at(y0 - 1, x1) + maskInt.at(y0, x0); double localNorm = computeNorm(clustCentVec, currPoint); if ((areaSum / (p_width * p_height) > 0.1) & (localNorm <= p.headPatchDistTh)) { headPatchCenterAux.at(0, 0) = (double)(x0 + p.patchSize / 2); headPatchCenterAux.at(0, 1) = (double)(y0 + p.patchSize / 2); headPatchDataMat.push_back(headPatchCenterAux); // offset on X evaluatePackedTree(depthInt, maskInt, cascade[1].treeMat, cascade[1].idxMap, 0, roi, &bufferLeaf); dummyVote.at(0) = x0 + p.patchSize / 2 - bufferLeaf.mean.at(0); // offset on Y evaluatePackedTree(depthInt, maskInt, cascade[2].treeMat, cascade[2].idxMap, 0, roi, &bufferLeaf); dummyVote.at(1) = y0 + p.patchSize / 2 - bufferLeaf.mean.at(1); dummyVote.at(2) = 0; // Yaw evaluatePackedTree(depthInt, maskInt, cascade[3].treeMat, cascade[3].idxMap, 0, roi, &bufferLeaf); 
dummyVote.at<double>(3) = bufferLeaf.mean.at<double>(3);
            dummyVote.at<double>(5) = 0;
            dummyVote.at<double>(6) = bufferLeaf.trace;

            // Tilt
            evaluatePackedTree(depthInt, maskInt, cascade[4].treeMat, cascade[4].idxMap, 0, roi, &bufferLeaf);
            dummyVote.at<double>(4) = bufferLeaf.mean.at<double>(4);
            dummyVote.at<double>(7) = bufferLeaf.trace;

            votes.push_back(dummyVote);
        }
    }
}

vector<double> mnHC(2);
mnHC[0] = 0;
mnHC[1] = 0;

Mat votesFin(8, votes.rows, CV_64FC1);
transpose(votes, votesFin);

vector<double> offsetXVec(votesFin.cols), offsetYVec(votesFin.cols), predYawVec(votesFin.cols);
vector<double> predTiltVec(votesFin.cols), traceYawVec(votesFin.cols), traceTiltVec(votesFin.cols);
votesFin.row(0).copyTo(offsetXVec);
votesFin.row(1).copyTo(offsetYVec);
votesFin.row(3).copyTo(predYawVec);
votesFin.row(4).copyTo(predTiltVec);
votesFin.row(6).copyTo(traceYawVec);
votesFin.row(7).copyTo(traceTiltVec);

if (offsetXVec.size() > 0)
{
    mnHC[0] = computeMean(offsetXVec);
    mnHC[1] = computeMean(offsetYVec);
    eAngles.x = mnHC[0];
    eAngles.y = mnHC[1];

    vector<double> distHeadCenter(headPatchDataMat.rows);
    vector<size_t> distHeadCenterIdx(headPatchDataMat.rows);
    vector<double> headPatchDataLocal(2);
    for (int k = 0; k < headPatchDataMat.rows; k++)
    {
        headPatchDataMat.row(k).copyTo(headPatchDataLocal);
        distHeadCenter[k] = computeNorm(headPatchDataLocal, mnHC);
        distHeadCenterIdx[k] = k;
    }

    std::sort(distHeadCenterIdx.begin(), distHeadCenterIdx.end(),
              [&distHeadCenter](size_t i1, size_t i2) { return distHeadCenter[i1] < distHeadCenter[i2]; });

    // keep only the minVotes patches closest to the estimated head center
    vector<int> headCenterInd(offsetXVec.size());
    headCenterInd.assign(offsetXVec.size(), 0);
    if (offsetXVec.size() > p.minVotes)
    {
        for (int k = 0; k < p.minVotes; k++)
        {
            headCenterInd[(int)distHeadCenterIdx[k]] = 1;
        }
    }
    else
    {
        headCenterInd.assign(offsetXVec.size(), 1);
    }

    // keep only the votes whose leaf trace (uncertainty) is low enough
    vector<int> traceInd(offsetXVec.size());
    traceInd.assign(offsetXVec.size(), 0);
    for (int k = 0; k < (int)offsetXVec.size(); k++)
    {
        if ((traceYawVec[k] + traceTiltVec[k]) / 2 < p.yawTrace)
        {
            traceInd[k] = 1;
        }
    }

    vector<int> allInds(traceInd.size());
    Mat yawTiltPoints(0, 2, CV_64FC1);
    Mat yawTiltDummy(1, 2,
CV_64FC1);
    for (int k = 0; k < (int)traceInd.size(); k++)
    {
        allInds[k] = traceInd[k] * headCenterInd[k];
        if (allInds[k] == 1)
        {
            yawTiltDummy.at<double>(0) = predYawVec[k];
            yawTiltDummy.at<double>(1) = predTiltVec[k];
            yawTiltPoints.push_back(yawTiltDummy);
        }
    }
    //std::transform(headCenterInd.begin() + 1, headCenterInd.end(), traceInd.begin() + 1, std::multiplies<int>());

    int maxClustIdx = -1;
    Mat clustCent = meanShift(yawTiltPoints, p.ytDiameter, &maxClustIdx);
    if (maxClustIdx != -1 && maxClustIdx < clustCent.rows)
    {
        eAngles.yaw = clustCent.at<double>(maxClustIdx, 0);
        eAngles.tilt = clustCent.at<double>(maxClustIdx, 1);
    }
    //clustCent.row(maxClustIdx).copyTo(eAngles);
    }
}

return eAngles;
}
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Detector/DepthFrameDetector/localFunctions.h
================================================
#ifndef LOCALFUNCTIONS_H
#define LOCALFUNCTIONS_H

#include <string>
#include <vector>

#include "opencv2/core/core.hpp"
#include "Tree.h"

using namespace std;
using namespace cv;

namespace hpe
{
    struct paramList
    {
        double maskThreshold;
        int stride;
        int patchSize;
        double headPatchProb;
        double headDiameter;
        double headPatchDistTh;
        double minVotes;
        double yawTrace;
        double tiltTrace;
        double ytDiameter;
    };

    struct headPoseInfo
    {
        double x, y, yaw, tilt;
    };

    void loadConfig(std::string filename, paramList *p, bool verbose = false);
    Rect getBoundingBox(const Mat &im3D, paramList p);
    double computeNorm(vector<double> vec1, vector<double> vec2);
    double computeMean(vector<double> vec);
    headPoseInfo estimateHeadAngles(Mat depthFrame, vector<Tree> cascade, paramList p);
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Exception/HPEException.cpp
================================================
#include "HPEException.h"

HPEException::HPEException(std::string message) : m_message(message)
{
}

const char *HPEException::what()
{
    return m_message.c_str();
}

================================================
FILE: 
FaceCept3D/HeadPoseEstimationFramework/Exception/HPEException.h
================================================
#ifndef HPEEXCEPTION_H
#define HPEEXCEPTION_H

#include <exception>
#include <string>

class HPEException : public std::exception
{
    public:
        HPEException(std::string message);
        virtual const char *what();

    private:
        std::string m_message;
};

#endif // HPEEXCEPTION_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/FacialExpressions/ferLocalFunctions.cpp
================================================
#include <string>
#include <vector>
#include <map>
#include <fstream>
#include <sstream>
#include <iostream>
#include <cmath>

#include "opencv2/core/core.hpp"
#include "ferLocalFunctions.h"

#include <boost/filesystem.hpp>

namespace fer
{
    using namespace std;
    using namespace cv;

    std::string to_string(int val)
    {
        std::stringstream str;
        str << val;
        return str.str();
    }

    vector<string> get_all_files_names_within_folder(string folder)
    {
        vector<string> names;
        namespace fs = boost::filesystem;
        fs::directory_iterator end_iter;
        if (fs::exists(folder) && fs::is_directory(folder))
        {
            for (fs::directory_iterator dir_iter(folder); dir_iter != end_iter; ++dir_iter)
            {
                if (fs::is_regular_file(dir_iter->status()))
                {
                    // record the regular file's name
                    names.push_back(dir_iter->path().filename().string());
                }
            }
        }
        return names;
    }

    Mat RV_readCSVMat(string csvFrameName, int lin, int col)
    {
        Mat output(lin, col, CV_64FC1);
        ifstream inFile(csvFrameName);
        string line;
        int linenum = 0;
        while (getline(inFile, line))
        {
            linenum++;
            //cout << "\nLine #" << linenum << ":" << endl;
            istringstream linestream(line);
            string item;
            int itemnum = 0;
            while (getline(linestream, item, ','))
            {
                itemnum++;
                //cout << "Item #" << itemnum << ": " << item << endl;
                double temp = (double)atof(item.c_str());
                output.at<double>(linenum - 1, itemnum - 1) = temp;
            }
        }
        return output;
    }

    void RV_readParamList(const string foldername, paramList *p)
    {
        cout << "Reading params from: " << foldername << "...";
        ifstream in(foldername + "config.txt");
        string dummy;
        if (in.is_open())
        {
            in >> dummy;
            in >> p->d;
            in >> dummy;
            in >> p->patchSize;
            in >> dummy;
            in >> p->stride;
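`RV_readParamList` above consumes `config.txt` as alternating label/value tokens, discarding each label into `dummy` and reading the value positionally. A hedged, self-contained sketch of that stream layout, parsed into a lookup map instead of fixed struct fields (function and field names here are illustrative, not part of the framework):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Parse a whitespace-separated "label value" stream, the layout config.txt uses.
// Unlike the positional reads above, unknown labels are simply kept in the map.
std::map<std::string, int> parseConfig(const std::string &text)
{
    std::istringstream in(text);
    std::map<std::string, int> values;
    std::string label;
    int value;
    while (in >> label >> value) // stops cleanly at end of stream or malformed pair
        values[label] = value;
    return values;
}
```

A map-based parse like this is order-independent, whereas the positional reads in `RV_readParamList` silently misassign every field if the file's line order changes.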
in >> dummy;
            in >> p->featSize;
            in >> dummy;
            in >> p->noPatches;
            in >> dummy;
            in >> p->noChannels;
            in >> dummy;
            in >> p->noTrees;
        }
        else
        {
            cerr << "File not found " << foldername + "config.txt" << endl;
            exit(-1);
        }
        in.close();

        Mat dummyMat(p->d, 16, CV_64FC1);
        dummyMat.setTo(0);
        for (int i = 0; i < p->noPatches; i++)
        {
            p->patchFeats.push_back(dummyMat);
        }

        Mat paramsPatchCoords = RV_readCSVMat(foldername + "paramsPatchCoords.csv", p->noPatches, 2);
        p->patchCoords = paramsPatchCoords;
        for (int i = 0; i < p->noPatches; i++)
        {
            // rectangle coordinates are imported from Matlab as they are, which means they
            // obey Matlab indexing (starting from 1); this is taken care of explicitly at
            // the feature computation stage
            Mat patchFeatsAux = RV_readCSVMat(foldername + "patchFeats/patchFeats_" + to_string(i + 1) + ".csv", p->d, 16);
            Mat patchFeatsAuxInt(patchFeatsAux.rows, patchFeatsAux.cols, CV_8UC1);
            patchFeatsAux.convertTo(patchFeatsAuxInt, CV_8UC1);
            p->patchFeats[i] = patchFeatsAuxInt;
        }

        cout << "done!" << endl;
        cout << endl << "------------------------------------" << endl;
        cout << "Parameter List:" << endl << endl;
        cout << "d: " << p->d << endl;
        cout << "patchSize: " << p->patchSize << endl;
        cout << "stride: " << p->stride << endl;
        cout << "featSize: " << p->featSize << endl;
        cout << "noPatches: " << p->noPatches << endl;
        cout << "noChannels: " << p->noChannels << endl;
        cout << endl << "------------------------------------" << endl << endl;
    }

    vector<vector<randomTree>> RV_readAllForests(string forestDir, paramList params)
    {
        vector<vector<randomTree>> allForests(params.noPatches, vector<randomTree>(params.noTrees));
        cout << "Reading forests from: " << forestDir << "...";
        for (int i = 0; i < params.noPatches; i++)
        {
            for (int j = 0; j < params.noTrees; j++)
            {
                Mat dummySize = RV_readCSVMat(forestDir + "patch_" + to_string(i + 1) + "_tree_" + to_string(j + 1) + "_size.csv", 1, 1);
                Mat dummyMat = RV_readCSVMat(forestDir + "patch_" + to_string(i + 1) + "_tree_" + to_string(j + 1) + ".csv", 4, (int)dummySize.at<double>(0, 0));
                allForests[i][j].cutVar.reserve(dummyMat.cols);
                allForests[i][j].cutValue.reserve(dummyMat.cols);
                allForests[i][j].rightChild.reserve(dummyMat.cols);
                allForests[i][j].leafVal.reserve(dummyMat.cols);
                dummyMat.row(0).copyTo(allForests[i][j].cutVar);
                dummyMat.row(1).copyTo(allForests[i][j].cutValue);
                dummyMat.row(2).copyTo(allForests[i][j].rightChild);
                dummyMat.row(3).copyTo(allForests[i][j].leafVal);
                //cout << "patch: " << i << " tree: " << j << endl;
            }
        }
        cout << "done!"
<< endl;
        return allForests;
    }

    vector<double> RV_computeHist(vector<double> vec, int noBins)
    {
        // supposedly vec has exactly noBins unique values
        const double eps = 1.0 / vec.size();
        vector<double> out(noBins);
        out.assign(noBins, 0);
        for (int i = 0; i < (int)vec.size(); i++)
        {
            out[(int)vec[i] - 1] += eps;
        }
        return out;
    }

    vector<double> RV_testForest(Mat localData, vector<randomTree> &localForest, paramList params, int numberOfClasses)
    {
        vector<double> votes(params.noTrees);
        votes.assign(params.noTrees, 0);
        for (int k = 0; k < params.noTrees; k++)
        {
            randomTree tree = localForest[k];
            int currNode = 0;
            // descend the flattened tree: cutVar <= 0 marks a leaf
            while (tree.cutVar[currNode] > 0)
            {
                if (localData.at<double>((int)tree.cutVar[currNode] - 1) > tree.cutValue[currNode])
                {
                    currNode = (int)tree.rightChild[currNode];
                }
                else
                {
                    currNode = (int)tree.rightChild[currNode] - 1;
                }
            }
            votes[k] = tree.leafVal[currNode];
        }
        vector<double> probs = RV_computeHist(votes, numberOfClasses);
        return probs;
    }

    Rect RV_getBoundingBox(const Mat &im3D, paramList p)
    {
        Rect bbox;
        int min_x = im3D.cols;
        int min_y = im3D.rows;
        int max_x = 0;
        int max_y = 0;
        int p_width = p.patchSize;
        int p_height = p.patchSize;
        for (int y = 0; y < im3D.rows; y++)
        {
            const double *Mi = im3D.ptr<double>(y);
            for (int x = 0; x < im3D.cols; x++)
            {
                if (Mi[x] > 0)
                {
                    min_x = MIN(min_x, x);
                    min_y = MIN(min_y, y);
                    max_x = MAX(max_x, x);
                    max_y = MAX(max_y, y);
                }
            }
        }
        int new_w = max_x - min_x;
        int new_h = max_y - min_y;
        bbox.x = MIN(im3D.cols - 1, MAX(0, min_x));
        bbox.y = MIN(im3D.rows - 1, MAX(0, min_y));
        //int new_w = max_x - min_x + p_width;
        //int new_h = max_y - min_y + p_height;
        //bbox.x = MIN( im3D.cols-1, MAX( 0 , min_x - p_width/2 ) );
        //bbox.y = MIN( im3D.rows-1, MAX( 0 , min_y - p_height/2) );
        bbox.width = MAX(0, MIN(new_w, im3D.cols - bbox.x));
        bbox.height = MAX(0, MIN(new_h, im3D.rows - bbox.y));
        return bbox;
    }

    double RV_computeNorm(vector<double> vec1, vector<double> vec2)
    {
        int dim = vec1.size();
        double sumSq = 0;
        for (int i = 0; i < dim; i++)
        {
            double diff = vec1[i] - vec2[i];
            sumSq += pow(diff, 2);
        }
        return sqrt(sumSq);
    }

    double RV_computeMean(vector<double> vec)
    {
        double sum = 0;
        for (int k = 0; k < (int)vec.size(); k++)
        {
            sum += vec[k];
        }
        return sum / vec.size();
    }

    Mat meanShift(Mat dataPts, double bandWidth, int *maxClustPointer)
    {
        *maxClustPointer = -1;
        int biggestClusterSize = 0;
        int numDim = dataPts.cols;
        int numPts = dataPts.rows;
        int numClust = -1;
        double bandSq = bandWidth * bandWidth;

        std::vector<int> initPtInds(numPts);
        for (int k = 0; k < (int)initPtInds.size(); k++)
        {
            initPtInds[k] = k;
        }

        double stopThresh = 1e-3 * bandWidth;
        Mat clustCent(0, numDim, CV_64FC1);
        vector<bool> beenVisitedFlag(numPts);
        int numInitPts = numPts;
        //vector<int> clusterVotes(numPts);

        while (numInitPts > 0)
        {
            // pick a random not-yet-visited point as the starting mean
            int tempInd = (int)std::ceil((numInitPts - 1 - 1e-6) * std::rand() / RAND_MAX);
            int stInd = initPtInds[tempInd];
            std::vector<double> myMean(numDim);
            for (int i = 0; i < numDim; i++)
            {
                myMean[i] = dataPts.at<double>(stInd, i);
            }

            vector<int> thisClusterVotes(numPts);
            while (1)
            {
                // squared distance from the current mean to every point
                vector<double> sqDistToAll(numPts);
                for (int i = 0; i < numPts; i++)
                {
                    double result = 0;
                    for (int j = 0; j < numDim; j++)
                    {
                        double diff = dataPts.at<double>(i, j) - myMean[j];
                        result += pow(diff, 2);
                    }
                    sqDistToAll[i] = result;
                }

                // points within the bandwidth support the current cluster
                vector<bool> inInds(numPts);
                for (int i = 0; i < numPts; i++)
                {
                    inInds[i] = sqDistToAll[i] < bandSq;
                    if (inInds[i] == true)
                    {
                        thisClusterVotes[i]++;
                    }
                }

                int countTrues = count_if(inInds.begin(), inInds.end(), [](bool b) { return b == true; });

                vector<double> myOldMean = myMean;
                cv::Mat tempData(countTrues, numDim, CV_64FC1);
                int localIdx = 0;
                for (int i = 0; i < (int)inInds.size(); i++)
                {
                    if (inInds[i] == true)
                    {
                        for (int j = 0; j < numDim; j++)
                        {
                            tempData.at<double>(localIdx, j) = dataPts.at<double>(i, j);
                        }
                        localIdx++;
                        beenVisitedFlag[i] = true;
                    }
                }

                Mat myMeanMat;
                reduce(tempData, myMeanMat, 0, CV_REDUCE_AVG);
                myMeanMat.copyTo(myMean);

                // compute the 2-norm of the difference-between-means vector
                double meanNorm = RV_computeNorm(myOldMean, myMean);
                if (meanNorm < stopThresh)
                {
                    // converged: merge with an existing cluster if close enough
                    int mergeWith = -1;
                    for (int cn = 0; cn <= numClust; cn++)
                    {
                        vector<double> localClustCent(numDim);
                        clustCent.row(cn).copyTo(localClustCent);
                        double distToOther = RV_computeNorm(myMean, localClustCent);
                        if (distToOther < bandWidth / 2)
                        {
                            mergeWith = cn;
                            break;
                        }
                    }
                    if (mergeWith >= 0)
                    {
                        Mat localMat = 0.5 * (myMeanMat + clustCent.row(mergeWith));
                        localMat.copyTo(clustCent.row(mergeWith));
                    }
                    else
                    {
                        numClust++;
                        clustCent.push_back(myMeanMat);
                        if (tempData.rows > biggestClusterSize)
                        {
                            biggestClusterSize = tempData.rows;
                            *maxClustPointer = numClust; // remember the index of the biggest cluster
                        }
                        //myMeanMat.copyTo(clustCent.row(numClust));
                    }
                    break;
                }
            }

            // restart from the points that have not been visited yet
            initPtInds.clear();
            for (int i = 0; i < (int)beenVisitedFlag.size(); i++)
            {
                if (beenVisitedFlag[i] == false)
                {
                    initPtInds.push_back(i);
                }
            }
            numInitPts = initPtInds.size();
        }
        return clustCent;
    }

    Mat RV_preprocessDepthFrame(Mat frame)
    {
        Mat output;
        double minVal = 100;
        double maxVal = -100;
        Point minLoc, maxLoc;
        for (int i = 0; i < frame.rows; i++)
        {
            for (int j = 0; j < frame.cols; j++)
            {
                if ((frame.at<double>(i, j) < minVal) && (frame.at<double>(i, j) != -100))
                {
                    minVal = frame.at<double>(i, j);
                    minLoc.x = i;
                    minLoc.y = j;
                }
                if (frame.at<double>(i, j) > maxVal)
                {
                    maxVal = frame.at<double>(i, j);
                    maxLoc.x = i;
                    maxLoc.y = j;
                }
            }
        }
        // map valid depths to (1..255]; background (-100) becomes 0
        for (int i = 0; i < frame.rows; i++)
        {
            for (int j = 0; j < frame.cols; j++)
            {
                if (frame.at<double>(i, j) != -100)
                {
                    frame.at<double>(i, j) = 255 - (frame.at<double>(i, j) - minVal) / (maxVal - minVal) * 254;
                }
                else
                {
                    frame.at<double>(i, j) = 0;
                }
            }
        }
        transpose(frame, output);
        return output;
    }

    void RV_featExtractionSingleFrame(Mat frame, paramList params, vector<double> *bgPerc, vector<Mat> &featData)
    {
        featData.clear();
        featData.resize((unsigned int)params.noPatches);
        for (int k = 0; k < params.noPatches; k++)
        {
            Mat dummyMat(1, (int)(params.d * params.noChannels), CV_64FC1);
            dummyMat.setTo(0);
            featData[k] = dummyMat;
        }
        int rows = frame.rows;
        int cols = frame.cols;
        vector<Mat> channels = RV_channelsExtractionSingleFrame(frame, params);
        vector<Mat> channelsInt(params.noChannels);
        vector<Mat> maskInt(params.noChannels);
        for (int i = 0; i < params.noChannels; i++)
        {
            Mat localChannel = channels[i];
            Mat
localChannelInt(rows + 1, cols + 1, CV_64FC1);
            localChannelInt.setTo(0);
            integral(localChannel, localChannelInt);

            Mat localMask(rows, cols, CV_32FC1);
            localMask.setTo(0);
            // "threshold" does not accept CV_64FC1 input, so convert to CV_32FC1 first
            localChannel.convertTo(localChannel, CV_32FC1);
            threshold(localChannel, localMask, 0.001, 1, THRESH_BINARY);

            Mat localMaskInt(rows + 1, cols + 1, CV_64FC1);
            localMaskInt.setTo(0);
            integral(localMask, localMaskInt);

            channelsInt[i] = localChannelInt;
            maskInt[i] = localMaskInt;
        }

        for (int k = 0; k < params.patchCoords.rows; k++)
        {
            Mat frameAux(frame, Rect((int)params.patchCoords.at<double>(k, 1), (int)params.patchCoords.at<double>(k, 0),
                                     (int)params.patchSize, (int)params.patchSize));
            double countBG = 0;
            // the first row and column are not covered by the rectangular features,
            // so the indices deliberately start from 1
            for (int i = 1; i < frameAux.rows; i++)
            {
                for (int j = 1; j < frameAux.cols; j++)
                {
                    if (frameAux.at<double>(i, j) == 0)
                    {
                        countBG++;
                    }
                }
            }
            (*bgPerc)[k] = countBG / (params.patchSize - 1) / (params.patchSize - 1);

            vector<Mat> localImageInt(params.noChannels);
            vector<Mat> localMaskInt(params.noChannels);
            for (int i = 0; i < params.noChannels; i++)
            {
                Mat dummyMat1(channelsInt[i], Rect((int)params.patchCoords.at<double>(k, 1) + 1, (int)params.patchCoords.at<double>(k, 0) + 1,
                                                   (int)params.patchSize, (int)params.patchSize));
                localImageInt[i] = dummyMat1;
                Mat dummyMat2(maskInt[i], Rect((int)params.patchCoords.at<double>(k, 1) + 1, (int)params.patchCoords.at<double>(k, 0) + 1,
                                               (int)params.patchSize, (int)params.patchSize));
                localMaskInt[i] = dummyMat2;
            }
            Mat feats = RV_computeDiffFeats(localImageInt, localMaskInt, params.patchFeats[k]);
            featData[k] = feats;
        }
        // return featData;
    }

    Mat RV_computeDiffFeats(vector<Mat> imageInt, vector<Mat> maskInt, Mat localFeat)
    {
        Mat feats(1, localFeat.rows * imageInt.size(), CV_64FC1);
        feats.setTo(0);
        //int numRectangles = params.patchFeats[0].cols / 4;
        for (int i = 0; i < (int)imageInt.size(); i++)
        {
            Mat localImg =
imageInt[i]; Mat localMask = maskInt[i]; for (int k = 0; k < localFeat.rows; k++) { //vector v(localFeat.cols); //localFeat.row(k).copyTo(v); // keep in mind that the coordinates of the rectangles are obeying Matlab indexing, therefore adaptation is performed explicitly here double a1 = localImg.at(localFeat.at(k, 3) - 1, localFeat.at(k, 2) - 1) - localImg.at(localFeat.at(k, 3) - 1, localFeat.at(k, 0) - 2) - localImg.at(localFeat.at(k, 1) - 2, localFeat.at(k, 2) - 1) + localImg.at(localFeat.at(k, 1) - 2, localFeat.at(k, 0) - 2); double a2 = localImg.at(localFeat.at(k, 7) - 1, localFeat.at(k, 6) - 1) - localImg.at(localFeat.at(k, 7) - 1, localFeat.at(k, 4) - 2) - localImg.at(localFeat.at(k, 5) - 2, localFeat.at(k, 6) - 1) + localImg.at(localFeat.at(k, 5) - 2, localFeat.at(k, 4) - 2); /*double a3 = localImg.at(localFeat.at(k, 11) - 1, localFeat.at(k, 10) - 1) - localImg.at(localFeat.at(k, 11) - 1, localFeat.at(k, 8) - 2) - localImg.at(localFeat.at(k, 9) - 2, localFeat.at(k, 10) - 1) + localImg.at(localFeat.at(k, 9) - 2, localFeat.at(k, 8) - 2); double a4 = localImg.at(localFeat.at(k, 15) - 1, localFeat.at(k, 14) - 1) - localImg.at(localFeat.at(k, 15) - 1, localFeat.at(k, 12) - 2) - localImg.at(localFeat.at(k, 13) - 2, localFeat.at(k, 14) - 1) + localImg.at(localFeat.at(k, 13) - 2, localFeat.at(k, 12) - 2);*/ double z1 = localMask.at(localFeat.at(k, 3) - 1, localFeat.at(k, 2) - 1) - localMask.at(localFeat.at(k, 3) - 1, localFeat.at(k, 0) - 2) - localMask.at(localFeat.at(k, 1) - 2, localFeat.at(k, 2) - 1) + localMask.at(localFeat.at(k, 1) - 2, localFeat.at(k, 0) - 2); double z2 = localMask.at(localFeat.at(k, 7) - 1, localFeat.at(k, 6) - 1) - localMask.at(localFeat.at(k, 7) - 1, localFeat.at(k, 4) - 2) - localMask.at(localFeat.at(k, 5) - 2, localFeat.at(k, 6) - 1) + localMask.at(localFeat.at(k, 5) - 2, localFeat.at(k, 4) - 2); /*double z3 = localMask.at(localFeat.at(k, 11) - 1, localFeat.at(k, 10) - 1) - localMask.at(localFeat.at(k, 11) - 1, localFeat.at(k, 8) - 2) - 
localMask.at(localFeat.at(k, 9) - 2, localFeat.at(k, 10) - 1) + localMask.at(localFeat.at(k, 9) - 2, localFeat.at(k, 8) - 2); double z4 = localMask.at(localFeat.at(k, 15) - 1, localFeat.at(k, 14) - 1) - localMask.at(localFeat.at(k, 15) - 1, localFeat.at(k, 12) - 2) - localMask.at(localFeat.at(k, 13) - 2, localFeat.at(k, 14) - 1) + localMask.at(localFeat.at(k, 13) - 2, localFeat.at(k, 12) - 2);*/ double mz1 = (z1 > 1) ? z1 : 1; double mz2 = (z2 > 1) ? z2 : 1; /*double mz3 = (z3 > 1) ? z3 : 1; double mz4 = (z4 > 1) ? z4 : 1;*/ //double featVal = a1 / mz1 + a3 / mz3 - a2 / mz2 - a4 / mz4; double featVal = a1 / mz1 - a2 / mz2; feats.at(0, i * localFeat.rows + k) = featVal; } } return feats; } vector RV_channelsExtractionSingleFrame(Mat frame, paramList params) { vector output(params.noChannels); for (int i = 0; i < params.noChannels; i++) { Mat buff(frame.rows, frame.cols, CV_64FC1); buff.setTo(0); output[i] = buff; } output[0] = frame; Mat dx(frame.rows, frame.cols, CV_64FC1); Mat dy(frame.rows, frame.cols, CV_64FC1); Sobel(frame, dx, CV_64FC1, 0, 1); Sobel(frame, dy, CV_64FC1, 1, 0); Mat magnitudeAux(frame.rows, frame.cols, CV_64FC1); Mat orientation(frame.rows, frame.cols, CV_64FC1); for (int i = 0; i < frame.rows; i++) { for (int j = 0; j < frame.cols; j++) { magnitudeAux.at(i, j) = sqrt(pow(dx.at(i, j), 2) + pow(dy.at(i, j), 2)); orientation.at(i, j) = atan2(dx.at(i, j), dy.at(i, j)); } } Mat magnitude(frame.rows, frame.cols, CV_64FC1); log(magnitudeAux + 1.0, magnitude); // switch to log scale to attenuate differences... 
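`RV_computeDiffFeats` above reduces each rectangle pair to `a1 / mz1 - a2 / mz2`: every rectangle's intensity sum is normalized by its count of valid (foreground) pixels, clamped to at least 1, before the two rectangles are differenced. A minimal sketch of just that normalization step, with illustrative names and values (the real code obtains the sums from integral images):

```cpp
#include <algorithm>
#include <cassert>

// Mask-normalized difference of two rectangles: each sum is divided by its own
// valid-pixel count, so a rectangle lying partly on background is not penalized.
double diffFeature(double sumA, double validA, double sumB, double validB)
{
    double mzA = std::max(validA, 1.0); // clamp avoids division by zero on empty rectangles
    double mzB = std::max(validB, 1.0);
    return sumA / mzA - sumB / mzB;
}
```

The clamp is what makes the feature well defined on patches where one rectangle falls entirely on the zeroed background.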
output[1] = magnitude; /*namedWindow("frame", CV_WINDOW_AUTOSIZE); int flag = RV_imshow("frame", frame); namedWindow("magnitude", CV_WINDOW_AUTOSIZE); flag = RV_imshow("magnitude", magnitude); namedWindow("orientation", CV_WINDOW_AUTOSIZE); flag = RV_imshow("orientation", orientation);*/ //setMouseCallback("image2", onMouse, 0); // compute gradient histograms for each of the 6 discrete orientation intervals // if an orient falls inside a particular interval, we assign the pixel a value proportional // to the gradient magnitude weighted by a peaked Gaussian whose mean is the center of the // orientation interval and std = 1/(sqrt(2*pi))/5 (5 comes from the attempt of replicating Dollars toolbox) // of course, pdf(mu) should be 1, so we compensate the small std by dividing the pdf by 5 // using Sobel gradient results in orients in the range (-pi;pi) therefore we split this interval in 6 complementary subintervals double sigma = 1 / sqrt(2 * M_PI) / 5; for (int k = 0; k < 6; k++) { double mu = -5 * M_PI / 6 + k * 2 * M_PI / 6; Mat localChannel(frame.rows, frame.cols, CV_64FC1); for (int i = 0; i < frame.rows; i++) { for (int j = 0; j < frame.cols; j++) { double valuePdf = RV_gaussPdf(orientation.at(i, j), mu, sigma) / 5; localChannel.at(i, j) = magnitude.at(i, j) * valuePdf; } } output[k + 2] = localChannel; } // compute the LBP map of the original (grayscaled) depth image Mat lbpMap = RV_lbp(frame); Mat lbpMap64(lbpMap.rows, lbpMap.cols, CV_64FC1); lbpMap.convertTo(lbpMap64, CV_64FC1); for (int i = 1; i < frame.rows - 1; i++) { for (int j = 1; j < frame.cols - 1; j++) { output[8].at(i, j) = lbpMap64.at(i - 1, j - 1); } } return output; } double RV_gaussPdf(double x, double mu, double sigma) { /* x = double(int(x * 10)) / 10; static map pdfMap; if (pdfMap.find(x) != pdfMap.end()) { return pdfMap[x]; }*/ double aux = -pow(x - mu, 2) / (2 * pow(sigma, 2)); double out = 1 / (sqrt(2 * M_PI) * sigma) * exp(aux); /*pdfMap[x] = out;*/ return out; } int RV_imshow(string 
windowName, Mat img) { double minVal, maxVal; Mat draw(img.rows, img.cols, CV_8UC1); minMaxLoc(img, &minVal, &maxVal); img.convertTo(draw, CV_8UC1, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal)); imshow(windowName, draw); return 1; } Mat RV_lbp(const Mat &src) { Mat dst; dst = Mat::zeros(src.rows - 2, src.cols - 2, CV_8UC1); for (int i = 1; i < src.rows - 1; i++) { for (int j = 1; j < src.cols - 1; j++) { uchar center = (uchar)src.at(i, j); unsigned char code = 0; code |= ((uchar)src.at(i - 1, j - 1) > center) << 7; code |= ((uchar)src.at(i - 1, j) > center) << 6; code |= ((uchar)src.at(i - 1, j + 1) > center) << 5; code |= ((uchar)src.at(i, j + 1) > center) << 4; code |= ((uchar)src.at(i + 1, j + 1) > center) << 3; code |= ((uchar)src.at(i + 1, j) > center) << 2; code |= ((uchar)src.at(i + 1, j - 1) > center) << 1; code |= ((uchar)src.at(i, j - 1) > center) << 0; dst.at(i - 1, j - 1) = code; } } return 255 - dst; } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/FacialExpressions/ferLocalFunctions.h ================================================ #ifndef FERLOCALFUNCTIONS_H #define FERLOCALFUNCTIONS_H #include #include #include #include #include #include "opencv2/core/core.hpp" namespace fer { # define M_PI 3.14159265358979323846 /* pi */ using namespace std; using namespace cv; struct paramList { int d; int patchSize; int stride; int featSize; int noPatches; int noChannels; int noTrees; Mat patchCoords; vector patchFeats; }; struct randomTree { vector cutVar, cutValue, rightChild, leafVal; }; struct headPoseInfo { double x, y, yaw, tilt; }; vector get_all_files_names_within_folder(string folder); Mat RV_readCSVMat(string csvFrameName, int lin, int col); void RV_readParamList(const string foldername, paramList *p); vector> RV_readAllForests(string forestDir, paramList params); vector RV_computeHist(vector vec, int noBins); vector RV_testForest(Mat localData, vector &localForest, paramList params, 
int numberOfClasses); Rect RV_getBoundingBox(const Mat &im3D, paramList p); double RV_computeNorm(vector vec1, vector vec2); double RV_computeMean(vector vec); Mat RV_preprocessDepthFrame(Mat frameIn); void RV_featExtractionSingleFrame(Mat frame, paramList params, vector *bgPerc, vector &result); vector RV_channelsExtractionSingleFrame(Mat frame, paramList params); Mat RV_computeDiffFeats(vector imageInt, vector maskInt, Mat localFeat); double RV_gaussPdf(double x, double mu, double sigma); int RV_imshow(string windowName, Mat img); Mat RV_lbp(const Mat &src); } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/CylinderOptimizedFeatureCalculator.cpp ================================================ //#include "CylinderOptimizedFeatureCalculator.h" namespace hpe { } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/CylinderOptimizedFeatureCalculator.h ================================================ #pragma once #ifndef CYLINDEROPTIMIZEDFEATURECALCULATOR_H #define CYLINDEROPTIMIZEDFEATURECALCULATOR_H #include #include #include #include #include #include namespace hpe { enum FeatureType { Depth, RGB }; template class CylinderOptimizedFeatureCalculator : public FeatureCalculatorBase { public: typedef PointType TPoint; typedef NormalType TNormal; typedef std::shared_ptr> Sampler; typedef pcl::PointCloud Cloud; typedef pcl::PointCloud NormalCloud; typedef pcl::search::KdTree Search; CylinderOptimizedFeatureCalculator(Sampler sampler, FeatureType featureType) : FeatureCalculatorBase(sampler), m_visualize(false), m_featureType(featureType) { } ~CylinderOptimizedFeatureCalculator(void) { } void SetFeatureSize(cv::Size size) { m_featureSize = size; } cv::Mat GetFeatures(typename Cloud::Ptr &cloud) { m_result = CreateResult(); m_featureNumber = 0; this->Calculate(cloud); return m_result; } protected: void PreCompute() {} cv::Mat CreateResult() 
{ if (m_featureType == FeatureType::Depth) { return cv::Mat::ones(1, m_featureSize.width * m_featureSize.height, CV_32FC1) * -100; } else if (m_featureType == FeatureType::RGB) { return cv::Mat::zeros(1, m_featureSize.width * m_featureSize.height, CV_8UC3); } } void PreCompute(typename Cloud::Ptr cloud, pcl::ModelCoefficients &coefficients) { m_columns.clear(); m_rows.clear(); m_referencePoint.x = coefficients.values[0]; m_referencePoint.y = coefficients.values[1]; m_referencePoint.z = coefficients.values[2]; m_phiRange.first = coefficients.values[3]; m_phiRange.second = coefficients.values[4]; m_phiSteps = (int)coefficients.values[5]; m_yRange.first = coefficients.values[6]; m_yRange.second = coefficients.values[7]; m_ySteps = (int)coefficients.values[8]; //std::vector> indices; m_columns.resize(m_phiSteps); m_rows.resize(m_ySteps); for (int i = 0; i < cloud->points.size(); i += 1) { PointType pt = cloud->points[i]; if (boost::math::isnan(pt.x) || boost::math::isnan(pt.y) || boost::math::isnan(pt.z)) { continue; } int column = GetPointColumn(pt); if (column != -1) { if (column > 0) m_columns[column - 1].push_back(i); m_columns[column].push_back(i); if (column < m_phiSteps - 1) m_columns[column + 1].push_back(i); } int row = GetPointRow(pt); if (row != -1) { if (row > 0) m_rows[row - 1].push_back(i); m_rows[row].push_back(i); if (row < m_ySteps - 1) m_rows[row + 1].push_back(i); } } for (int i = 0; i < m_phiSteps; i += 1) { std::sort(m_columns[i].begin(), m_columns[i].end()); } for (int i = 0; i < m_ySteps; i += 1) { std::sort(m_rows[i].begin(), m_rows[i].end()); } if (m_visualize) { pcl::visualization::PCLVisualizer v("Segments"); v.addPointCloud(cloud); v.addPointCloud(cloud, "pts"); v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, "pts"); for (int i = 0; i < m_phiSteps; i += 1) { typename Cloud::Ptr subcloud(new Cloud); pcl::copyPointCloud(*cloud, m_columns[i], *subcloud); v.updatePointCloud(subcloud, "pts"); v.spinOnce(100); 
} v.removeAllPointClouds(); } /*m_trees.clear(); m_trees.resize(m_phiSteps); for (int i = 0; i < m_phiSteps; i += 1) { Cloud::Ptr subcloud(new Cloud); pcl::copyPointCloud(*cloud, indices[i], *subcloud); m_trees[i].setInputCloud(subcloud); }*/ } void ComputeFeature(typename Cloud::Ptr cloud, PointType &point, NormalType &normal) { int column = GetPointColumn(point); if (column == -1) { throw HPEException("CylinderOptimizedFeatureCalculator::ComputeFeature - internal structures corrupted"); } int row = GetPointRow(point); if (column == -1) { throw HPEException("CylinderOptimizedFeatureCalculator::ComputeFeature - internal structures corrupted"); } std::vector indices = GetIntersection(column, row); if (m_visualize) { /*Cloud::Ptr subcloud(new Cloud); pcl::copyPointCloud(*treeCloud, indices, *subcloud); subcloud->push_back(point); pcl::visualization::PCLVisualizer v("sub"); v.addPointCloud(cloud); v.addPointCloud(subcloud, "1"); v.addPointCloud(treeCloud, "2"); v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, "1"); v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "1"); v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, "2"); v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "2"); v.spin();*/ } if (indices.size() > 0) { std::vector distances; for (int i = 0; i < indices.size(); i += 1) { distances.push_back((point.getVector3fMap() - cloud->points[indices[i]].getVector3fMap()).norm()); } PointType interpolated; if (indices.size() == 1) { interpolated = cloud->points[indices[0]]; } else { interpolated = Interpolate(cloud, indices, distances); } if (m_featureType == FeatureType::Depth) { float depth = (interpolated.getVector3fMap() - point.getVector3fMap()).norm(); float pointProjectionRadius = (std::pow(m_referencePoint.x - point.x, 2) + std::pow(m_referencePoint.z - point.z, 2)); float interpolatedProjectionRadius = 
(std::pow(m_referencePoint.x - interpolated.x, 2) + std::pow(m_referencePoint.z - interpolated.z, 2)); if (pointProjectionRadius < interpolatedProjectionRadius) { depth = -depth; } m_result.at(m_featureNumber) = depth; } else if (m_featureType == FeatureType::RGB) { cv::Vec3b color = cv::Vec3b(interpolated.b, interpolated.g, interpolated.r); m_result.at(m_featureNumber) = color; } } m_featureNumber += 1; } private: int GetPointColumn(PointType pt) { float phiStep = (m_phiRange.second - m_phiRange.first) / m_phiSteps; float angle = std::atan2(pt.z - m_referencePoint.z, pt.x - m_referencePoint.x); int segment = (angle - m_phiRange.first - phiStep / 2) / phiStep; if (segment > m_phiSteps - 1 || segment < 0) { return -1; } return segment; } int GetPointRow(PointType pt) { float yStep = (m_yRange.second - m_yRange.first) / m_ySteps; int row = (pt.y - m_yRange.first - yStep / 2) / yStep; if (row > m_ySteps - 1 || row < 0) { return -1; } return row; } std::vector GetIntersection(int column, int row) { std::vector result; std::set_intersection(m_columns[column].begin(), m_columns[column].end(), m_rows[row].begin(), m_rows[row].end(), std::back_inserter(result)); return result; } PointType Interpolate(typename Cloud::Ptr &cloud, std::vector &indices, std::vector &distances) { float totalDistance = 0; for (int i = 0; i < distances.size(); i += 1) { totalDistance += distances[i]; } PointType result; int n = indices.size(); for (int i = 0; i < n; i++) { PointType p = cloud->at(indices[i]); float currentDistance = distances[i]; float c = (totalDistance - currentDistance) / ((n - 1) * totalDistance); result.x += c * p.x; result.y += c * p.y; result.z += c * p.z; result.r += c * (float)p.r; result.g += c * (float)p.g; result.b += c * (float)p.b; } return result; } std::vector> m_columns; std::vector> m_rows; std::vector> m_trees; pcl::PointXYZ m_referencePoint; std::pair m_phiRange; int m_phiSteps; std::pair m_yRange; int m_ySteps; bool m_visualize; cv::Mat m_result; cv::Size 
m_featureSize; int m_featureNumber; FeatureType m_featureType; }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/FeatureCalculatorBase.h ================================================ #pragma once #ifndef FEATURE_CALCULATOR_BASE_H #define FEATURE_CALCULATOR_BASE_H #include #include #include #ifndef EIGEN_YES_I_KNOW_SPARSE_MODULE_IS_NOT_STABLE_YET #define EIGEN_YES_I_KNOW_SPARSE_MODULE_IS_NOT_STABLE_YET #endif #include #include #include #include #include #include namespace hpe { template class FeatureCalculatorBase { public: void SetLandmarks(const typename Common::Landmarks &landmarks) { m_landmarks = landmarks; } protected: typedef PointType TPoint; typedef NormalType TNormal; typedef std::shared_ptr> Sampler; typedef pcl::PointCloud Cloud; typedef pcl::PointCloud NormalCloud; typedef pcl::search::KdTree Search; FeatureCalculatorBase(Sampler sampler) : m_sampler(sampler) { } void Calculate(typename Cloud::Ptr &cloud) { pcl::ModelCoefficients coefficients; if (m_landmarks.size() != 0) { m_featurePoints = m_sampler->SampleCloud(cloud, m_landmarks, coefficients); m_landmarks.clear(); } else { m_featurePoints = m_sampler->SampleCloud(cloud, coefficients); } auto normalCloud = m_featurePoints; int numberOfSamples = m_featurePoints->size(); m_featureNumber = 0; if (coefficients.values.size() == 0) { PreCompute(); } else { PreCompute(cloud, coefficients); } for (int i = 0; i < numberOfSamples; i++) { auto gridPoint = m_featurePoints->at(i); PointType point; NormalType normal; if ((boost::math::isnan(gridPoint.x) || boost::math::isnan(gridPoint.y) || boost::math::isnan(gridPoint.z)) == false) { // TODO make both usages: int samplePointIndex = i; //GetReferencePoint(gridPoint); point = m_featurePoints->at(samplePointIndex); normal = normalCloud->at(samplePointIndex); } ComputeFeature(cloud, point, normal); m_featureNumber++; } m_featureNumber = 0; } int GetReferencePoint(PointType &point) { 
                std::vector<int> indices;
                std::vector<float> distances;
                m_search->nearestKSearch(point, 1, indices, distances);
                return indices[0];
            }

            int GetFeatureNumber()
            {
                return m_featureNumber;
            }

            int GetFeatureCount()
            {
                return m_featurePoints->size();
            }

            virtual void ComputeFeature(typename Cloud::Ptr cloud, PointType &point, NormalType &normal) = 0;
            virtual void PreCompute() = 0;
            virtual void PreCompute(typename Cloud::Ptr cloud, pcl::ModelCoefficients &coefficients) {}

            typename Search::Ptr m_search;
            Sampler m_sampler;

        private:
            int m_featureNumber;
            typename Common<PointType>::Landmarks m_landmarks;
            typename Cloud::Ptr m_featurePoints;
    };
}

#endif //FEATURE_CALCULATOR_BASE_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Features/Calculator/IFeatureCalculator.h
================================================
#pragma once
#ifndef I_FEATURE_CALCULATOR_H
#define I_FEATURE_CALCULATOR_H

#include "Interface/IInterface.h"

namespace hpe
{
    template<typename PointType>
    class IFeatureCalculator : public IInterface
    {
        public:
            virtual cv::Mat GetFeatures(typename pcl::PointCloud<PointType>::Ptr &cloud) = 0;
    };
}

#endif //I_FEATURE_CALCULATOR_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Features/IFeature.h
================================================
#pragma once
#ifndef I_FEATURE_H
#define I_FEATURE_H

#include
#include
#include "Interface/IInterface.h"

namespace hpe
{
    template<typename PointType, typename NormalType>
    class IFeature : public IInterface
    {
        public:
            virtual cv::Mat Compute(typename pcl::PointCloud<PointType>::Ptr &cloud, PointType &point, NormalType &normal) = 0;
            virtual cv::Mat EmptyFeature() = 0;
    };
}

#endif //I_FEATURE_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Features/Sampler/CylinderSampler.cpp
================================================
#define _CRT_SECURE_NO_WARNINGS

#include
#include "CylinderSampler.h"
#include
#include
#include
#include
#include
#include
#include
#include

using namespace boost::assign;

namespace hpe
{
    CylinderSampler::CylinderSampler(Range phiRange, Range zRange, int topRows, int bottomRows, int sampleColumns)
        : m_phiRange(phiRange), m_zRange(zRange), m_topRows(topRows), m_bottomRows(bottomRows),
          m_visualize(false), m_sampleColumns(sampleColumns), m_supportOptimizedSampling(false)
    {
        //m_rightEyeIndices += 0, 4;
        //m_leftEyeIndices += 8, 12;
        //m_rightEyeIndices += 0;
        //m_rightEyeIndices += 0, 1, 2, 3, 4, 5, 6, 7;
        //m_leftEyeIndices += 1;
        //m_leftEyeIndices += 8, 9, 10, 11, 12, 13, 14, 15;
        m_leftEyeIndices += 0;
        m_rightEyeIndices += 1;
    }

    CylinderSampler::~CylinderSampler(void)
    {
    }

    pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr CylinderSampler::SampleCloud(pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr &cloud)
    {
        CloudType::Ptr res(new CloudType);
        return res;
    }

    pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr CylinderSampler::SampleCloud(pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks, pcl::ModelCoefficients &coefficients)
    {
        CloudType::Ptr res(new CloudType);

        PointType leftEye = MeanPoint(landmarks, m_leftEyeIndices);
        PointType rightEye = MeanPoint(landmarks, m_rightEyeIndices);
        Eigen::Vector3f betweenEyeVector = leftEye.getArray3fMap() - rightEye.getArray3fMap();
        float eyeDistance = betweenEyeVector.norm();

        pcl::ModelCoefficients cylinder = ComputeCylinder(rightEye, leftEye);
        Eigen::Vector3f direction;
        direction << 0, 0, 1; // Works only when the cloud is oriented along OZ

        Range phiRange(CV_PI * m_phiRange.first, CV_PI * m_phiRange.second);

        float eyeY = cylinder.values[1];
        float maxY = eyeY + m_zRange.second * eyeDistance;
        float minY = eyeY - m_zRange.first * eyeDistance;
        int totalRows = m_bottomRows + m_topRows;
        float stepSize = (maxY - minY) / totalRows;

        std::vector<float> samplingYs;
        float samplingYStart = eyeY + (m_topRows - 1) * stepSize;
        for (int i = 0; i < totalRows; i++)
        {
            samplingYs.push_back(samplingYStart);
            samplingYStart -= stepSize;
        }

        CloudType::Ptr samplingCylinderTop = GenerateSamplingCloud(cylinder, direction, phiRange, m_sampleColumns, samplingYs);

        coefficients.values.clear();
        if (m_supportOptimizedSampling)
        {
            coefficients.values.push_back(cylinder.values[0]);
            coefficients.values.push_back(cylinder.values[1]);
            coefficients.values.push_back(cylinder.values[2]);
            coefficients.values.push_back(phiRange.first);
            coefficients.values.push_back(phiRange.second);
            coefficients.values.push_back(m_sampleColumns);
            coefficients.values.push_back(eyeY + (m_topRows - 1) * stepSize);
            coefficients.values.push_back(eyeY - m_bottomRows * stepSize);
            coefficients.values.push_back(totalRows);
        }

        if (m_visualize)
        {
            std::cout << samplingCylinderTop->size() << std::endl;

            pcl::PointCloud<pcl::PointXYZRGB>::Ptr rgbCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
            pcl::copyPointCloud(*cloud, *rgbCloud);

            CloudType::Ptr eyes(new CloudType);
            eyes->push_back(leftEye);
            eyes->push_back(rightEye);

            std::vector<float> eyeYs;
            eyeYs.push_back(eyeY);
            auto eyeLine = GenerateSamplingCloud(cylinder, direction, phiRange, m_sampleColumns, eyeYs);

            std::vector<pcl::visualization::Camera> cameras;

            pcl::visualization::PCLVisualizer vEyes("Eyes");
            vEyes.addPointCloud(rgbCloud, "c");
            vEyes.addPointCloud(eyes, "e");
            vEyes.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, "e");
            vEyes.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "e");
            vEyes.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, "c");
            vEyes.setBackgroundColor(1, 1, 1);
            vEyes.spin();
            vEyes.getCameras(cameras);

            pcl::visualization::PCLVisualizer v("");
            v.addPointCloud(rgbCloud, "c");
            v.addPointCloud(samplingCylinderTop, "cyl");
            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, "cyl");
            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 1, 1, "cyl2");
            v.addCoordinateSystem();
            v.addPointCloud(eyeLine, "EyeLine");
            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, "EyeLine");
            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "EyeLine");
            v.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 4, "c");
            v.setBackgroundColor(1, 1, 1);
            v.setCameraParameters(cameras[0]);
            v.spin();
        }

        return samplingCylinderTop;
    }

    pcl::ModelCoefficients CylinderSampler::ComputeCylinder(PointType p1, PointType p2)
    {
        pcl::ModelCoefficients result;

        Eigen::Vector3f distanceVector = p1.getArray3fMap() - p2.getArray3fMap();
        float distance = distanceVector.norm();
        PointType middlePoint;
        middlePoint.getArray3fMap() = p1.getVector3fMap() - distanceVector / 2;

        float r = distance * 1.2;
        float rsq = r * r;
        float offset = std::sqrt(distance * distance - (distance / 2) * (distance / 2));
        float a = distance / 2;
        float hsq = rsq - a * a;
        Eigen::Vector3f P2 = p1.getArray3fMap() + a * (distance / 2);
        float x = P2[0] + sqrt(hsq) * (p2.y - p1.y) / distance;
        float y = P2[1] - sqrt(hsq) * (p2.x - p1.x) / distance;

        middlePoint.z -= offset;

        result.values.resize(7);
        result.values[0] = middlePoint.x;
        result.values[1] = middlePoint.y;
        result.values[2] = middlePoint.z;
        result.values[3] = 0;
        result.values[4] = 1;
        result.values[5] = 0;
        result.values[6] = r;

        return result;
    }

    CylinderSampler::PointType CylinderSampler::MeanPoint(const Common<PointType>::Landmarks &landmarks, const std::vector<int> &indices)
    {
        PointType res;
        Eigen::Vector3f result;
        result << 0, 0, 0;
        for (int i = 0; i < indices.size(); i++)
        {
            Eigen::Vector3f p = landmarks[indices[i]].point.getArray3fMap();
            result += p;
        }
        result /= indices.size();
        res.getArray3fMap() = result;
        return res;
    }

    CylinderSampler::CloudType::Ptr CylinderSampler::GenerateSamplingCloud(pcl::ModelCoefficients cylinder, Eigen::Vector3f direction, Range phiRange, int phiSteps, std::vector<float> samplingYs)
    {
        float radius = cylinder.values[6];
        float phi = phiRange.first;
        float phiStep = (phiRange.second - phiRange.first) / phiSteps;

        CloudType::Ptr result(new CloudType);
        for (int currentPhi = 0; currentPhi < phiSteps; ++currentPhi)
        {
            PointType p;
            p.x = radius * std::cos(phi);
            p.z = radius * std::sin(phi);
            Eigen::Vector3f normalVector;
            normalVector << p.x, p.y, p.z;
            normalVector /= normalVector.norm();
            p.normal_x = -normalVector[0];
            p.normal_y = -normalVector[1];
            p.normal_z = -normalVector[2];

            for (int currentY = 0; currentY < samplingYs.size(); ++currentY)
            {
                PointType p2 = p;
                p2.y = samplingYs[currentY];
                result->push_back(p2);
            }

            phi += phiStep;
        }

        Eigen::Quaternionf rotation;
        Eigen::Vector3f sourceVector;
        sourceVector << 0, 0, 1;
        rotation.setFromTwoVectors(sourceVector, direction);

        Eigen::Matrix4f transform = Eigen::Matrix4f::Identity();
        Eigen::Vector3f shift;
        shift << cylinder.values[0], 0, cylinder.values[2];
        transform.block<3, 1>(0, 3) = shift;
        transform.block<3, 3>(0, 0) = rotation.toRotationMatrix();
        pcl::transformPointCloud(*result, *result, transform);

        PointType p13;
        p13.getArray3fMap() << cylinder.values[0], 0, cylinder.values[2];

        return result;
    }

    Eigen::Vector3f CylinderSampler::GetDirection(pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks)
    {
        pcl::NormalEstimation<pcl::PointXYZRGBNormal, pcl::Normal> ne;
        ne.setInputCloud(cloud);
        pcl::search::KdTree<pcl::PointXYZRGBNormal>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGBNormal>());
        ne.setSearchMethod(tree);
        pcl::PointCloud<pcl::Normal>::Ptr cloud_normals(new pcl::PointCloud<pcl::Normal>);
        ne.setRadiusSearch(0.3);
        ne.compute(*cloud_normals);

        pcl::Normal normal = cloud_normals->points[landmarks[m_noseTipIndex].index];
        Eigen::Vector3f direction;
        direction << normal.normal_x, normal.normal_y, normal.normal_z;
        return direction;
    }

    void CylinderSampler::SetVisualize(bool visualizeSampler)
    {
        m_visualize = visualizeSampler;
    }

    void CylinderSampler::SetEyeIndices(std::vector<int> &leftEyeIndices, std::vector<int> &rightEyeIndices)
    {
        m_leftEyeIndices = leftEyeIndices;
        m_rightEyeIndices = rightEyeIndices;
    }

    void CylinderSampler::SetSupportOptimizedSampling(bool support)
    {
        m_supportOptimizedSampling = support;
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Features/Sampler/CylinderSampler.h
================================================
#pragma once

#include
#include

#ifndef CYLINDERSAMPLER_H
#define CYLINDERSAMPLER_H

namespace hpe
{
    class CylinderSampler : public IPatchSampler<pcl::PointXYZRGBNormal>
    {
        public:
            typedef std::pair<float, float> Range;
            typedef pcl::PointXYZRGBNormal PointType;
            typedef pcl::PointCloud<PointType> CloudType;

            CylinderSampler(Range phiRange, Range zRange, int topRows, int bottomRows, int sampleColumns);
            ~CylinderSampler(void);

            pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr SampleCloud(pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr &cloud);
            pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr SampleCloud(pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks, pcl::ModelCoefficients &coefficients);

            void SetVisualize(bool visualizeSampler);
            void SetEyeIndices(std::vector<int> &leftEyeIndices, std::vector<int> &rightEyeIndices);
            void SetSupportOptimizedSampling(bool support);

        private:
            pcl::ModelCoefficients ComputeCylinder(PointType p1, PointType p2);
            PointType MeanPoint(const Common<PointType>::Landmarks &landmarks, const std::vector<int> &indices);
            CloudType::Ptr GenerateSamplingCloud(pcl::ModelCoefficients cylinder, Eigen::Vector3f direction, Range phiRange, int phiSteps, std::vector<float> samplingYs);
            Eigen::Vector3f GetDirection(pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr &cloud, const Common<PointType>::Landmarks &landmarks);

            std::vector<int> m_leftEyeIndices;
            std::vector<int> m_rightEyeIndices;
            int m_noseTipIndex;
            bool m_visualize;
            bool m_supportOptimizedSampling;

            Range m_phiRange;
            Range m_zRange;
            int m_topRows;
            int m_bottomRows;
            int m_sampleColumns;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Features/Sampler/IPatchSampler.h
================================================
#pragma once
#ifndef I_PATCH_SAMPLER
#define I_PATCH_SAMPLER

#include
#include
#include
#include

namespace hpe
{
    template<typename PointType>
    class IPatchSampler : public IInterface
    {
        public:
            virtual typename pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud) = 0;

            virtual typename pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud, const typename Common<PointType>::Landmarks &landmarks)
            {
                return SampleCloud(cloud);
            }

            virtual typename
            pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud, pcl::ModelCoefficients &coefficients)
            {
                coefficients.values.clear();
                return SampleCloud(cloud);
            }

            virtual typename pcl::PointCloud<PointType>::Ptr SampleCloud(typename pcl::PointCloud<PointType>::Ptr &cloud, const typename Common<PointType>::Landmarks &landmarks, pcl::ModelCoefficients &coefficients)
            {
                return SampleCloud(cloud, coefficients);
            }
    };
}

#endif //I_PATCH_SAMPLER

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/BoxFilter.h
================================================
#ifndef BOXFILTER_H
#define BOXFILTER_H

#include "Filter/ICloudFilter.h"
#include "Filter/Filters.h"

namespace hpe
{
    /**
        \class BoxFilter
        \brief A filter that cuts a box inside a point cloud.

        \author Sergey
        \date 8/11/2015

        \tparam PointType Point type of the cloud.
    */
    template<typename PointType>
    class BoxFilter : public ICloudFilter<PointType>
    {
        public:
            typedef std::shared_ptr<BoxFilter<PointType>> Ptr;

            //TODO consider using pcl::BoxClipper3D here (maybe)
            BoxFilter(float xFrom, float xTo, float yFrom, float yTo, float zFrom, float zTo)
                : m_xFrom(xFrom), m_xTo(xTo), m_yFrom(yFrom), m_yTo(yTo), m_zFrom(zFrom), m_zTo(zTo)
            {
            }

            typename pcl::PointCloud<PointType>::Ptr Filter(typename pcl::PointCloud<PointType>::Ptr &cloud)
            {
                auto x = std::minmax(m_xFrom, m_xTo);
                auto c1 = PassThrough<PointType>(cloud, x.first, x.second, "x");
                auto y = std::minmax(m_yFrom, m_yTo);
                auto c2 = PassThrough<PointType>(c1, y.first, y.second, "y");
                auto z = std::minmax(m_zFrom, m_zTo);
                auto c3 = PassThrough<PointType>(c2, z.first, z.second, "z");
                return c3;
            }

        private:
            float m_xFrom;
            float m_xTo;
            float m_yFrom;
            float m_yTo;
            float m_zFrom;
            float m_zTo;
    };
}

#endif // BOXFILTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/CenteredBoxFilter.h
================================================
#ifndef CENTEREDBOXFILTER_H
#define CENTEREDBOXFILTER_H

#include "Filter/ICenterCloudFilter.h"
#include "Filter/Filters.h"
#include

namespace hpe
{
    template<typename PointType>
    class CenteredBoxFilter : public ICenterCloudFilter<PointType>
    {
        public:
            typedef pcl::PointCloud<PointType> Cloud;

            CenteredBoxFilter(float sizeX, float sizeY, float sizeZ)
                : m_sizeX(sizeX / 2), m_sizeY(sizeY / 2), m_sizeZ(sizeZ / 2)
            {
            }

            typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud, float x, float y, float z)
            {
                float xFrom = x - m_sizeX;
                float xTo = x + m_sizeX;
                auto result = PassThrough<PointType>(cloud, xFrom, xTo, "x", cloud->isOrganized());

                float yFrom = y - m_sizeY;
                float yTo = y + m_sizeY;
                result = PassThrough<PointType>(result, yFrom, yTo, "y", cloud->isOrganized());

                float zFrom = z - m_sizeZ;
                float zTo = z + m_sizeZ;
                result = PassThrough<PointType>(result, zFrom, zTo, "z", cloud->isOrganized());

                return result;
            }

        private:
            float m_sizeX;
            float m_sizeY;
            float m_sizeZ;
    };
}

#endif // CENTEREDBOXFILTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/FilteringQueue.h
================================================
#pragma once
#ifndef FILTERINGQUEUE_H
#define FILTERINGQUEUE_H

#include
#include
#include
#include
#include "Filter/ICloudFilter.h"
#include "Filter/FunctorFilter.h"
#include

namespace hpe
{
    /**
        \class FilteringQueue
        \brief A queue of ICloudFilter objects that has ICloudFilter interface.

        \author Sergey
        \date 8/11/2015

        \tparam PointType Type of the point type.
    */
    template<typename PointType>
    class FilteringQueue : public ICloudFilter<PointType>
    {
        public:
            typedef std::shared_ptr<FilteringQueue<PointType>> Ptr;
            typedef std::shared_ptr<ICloudFilter<PointType>> InputFilterPtr;

            FilteringQueue() {}

            void AddFilter(std::shared_ptr<ICloudFilter<PointType>> filter)
            {
                m_filters.push_back(filter);
            }

            void AddFilterFunctor(typename FunctorFilter<PointType>::Functor functor)
            {
                typename FunctorFilter<PointType>::Ptr filter(new FunctorFilter<PointType>(functor));
                m_filters.push_back(filter);
            }

            typename pcl::PointCloud<PointType>::Ptr Filter(typename pcl::PointCloud<PointType>::Ptr &cloud)
            {
                typename pcl::PointCloud<PointType>::Ptr result(new pcl::PointCloud<PointType>);
                pcl::copyPointCloud(*cloud, *result);
                for (auto it = m_filters.begin(); it != m_filters.end(); ++it)
                {
                    result = (*it)->Filter(result);
                }
                return result;
            }

        private:
            std::vector<std::shared_ptr<ICloudFilter<PointType>>> m_filters;
    };
}

#endif // FILTERINGQUEUE_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/Filters.h
================================================
#ifndef FILTERS_H
#define FILTERS_H

#include
#include
#include
#include
#include
#include
#include
#include "UI/ShowCloud.h"

namespace hpe
{
    template<typename PointT>
    typename pcl::PointCloud<PointT>::Ptr SortPoints(typename pcl::PointCloud<PointT>::Ptr cloud/*, std::function comparer*/)
    {
        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);
        pcl::copyPointCloud(*cloud, *result);
        std::sort(result->begin(), result->end(), [](PointT a, PointT b) -> bool
        {
            float value = 100000;
            return a.y * value + a.x > b.y * value + b.x;
        });
        return result;
    }

    template<typename PointT, typename NormalT>
    typename pcl::PointCloud<NormalT>::Ptr ComputeNormals(typename pcl::PointCloud<PointT>::Ptr &cloud, int kSearch = 30, int radiusSearch = 0)
    {
        typename pcl::PointCloud<NormalT>::Ptr result(new pcl::PointCloud<NormalT>);
        typedef pcl::search::KdTree<PointT> Search;
        typename Search::Ptr search(new Search);
        pcl::NormalEstimation<PointT, NormalT> estimation;
        estimation.setSearchMethod(search);
        if (kSearch != 0)
        {
            estimation.setKSearch(kSearch);
        }
        else if (radiusSearch != 0)
        {
            estimation.setRadiusSearch(radiusSearch);
        }
        else
        {
            throw HPEException("ComputeNormals : Specify either kSearch or radiusSearch");
        }
        estimation.setInputCloud(cloud);
        estimation.compute(*result);
        return result;
    }

    template<typename PointT>
    typename pcl::PointCloud<PointT>::Ptr Voxelize(typename pcl::PointCloud<PointT>::Ptr cloud, float leafSize)
    {
        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);
        pcl::VoxelGrid<PointT> grid;
        grid.setLeafSize(leafSize, leafSize, leafSize);
        grid.setInputCloud(cloud);
        grid.filter(*result);
        return result;
    }

    template<typename PointT>
    typename pcl::PointCloud<PointT>::Ptr RemoveOutliersByDistance(typename pcl::PointCloud<PointT>::Ptr cloud, int numberOfNeighborsToAnalyze, int stdThreshold)
    {
        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);
        pcl::StatisticalOutlierRemoval<PointT> outlierRemoval;
        outlierRemoval.setInputCloud(cloud);
        outlierRemoval.setMeanK(numberOfNeighborsToAnalyze);
        outlierRemoval.setStddevMulThresh(stdThreshold);
        outlierRemoval.filter(*result);
        return result;
    }

    template<typename PointT>
    typename pcl::PointCloud<PointT>::Ptr PassThrough(typename pcl::PointCloud<PointT>::Ptr &cloud, float from, float to, std::string filterName, bool keepOrganized = true)
    {
        typename pcl::PointCloud<PointT>::Ptr result(new typename pcl::PointCloud<PointT>);
        pcl::PassThrough<PointT> filter;
        filter.setFilterLimits(from, to);
        if (cloud->isOrganized())
        {
            filter.setKeepOrganized(keepOrganized);
        }
        filter.setFilterFieldName(filterName);
        filter.setInputCloud(cloud);
        filter.filter(*result);
        return result;
    }

    template<typename PointT>
    typename pcl::PointCloud<PointT>::Ptr ResampleMLS(typename pcl::PointCloud<PointT>::Ptr &cloud, int polynomialOrder = 2, float searchRadius = 0.005)
    {
        typename pcl::PointCloud<PointT>::Ptr result(new pcl::PointCloud<PointT>);
        typename pcl::search::KdTree<PointT>::Ptr search(new pcl::search::KdTree<PointT>);
        pcl::MovingLeastSquares<PointT, PointT> mls;
        mls.setInputCloud(cloud);
        mls.setPolynomialFit(true);
        mls.setPolynomialOrder(polynomialOrder);
        mls.setSearchMethod(search);
        mls.setSearchRadius(searchRadius);
        mls.process(*result);
        return result;
    }
}

#endif // FILTERS_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/FunctorFilter.h
================================================
#ifndef FUNCTORFILTER_H
#define FUNCTORFILTER_H

#include
#include
#include "Filter/ICloudFilter.h"

namespace hpe
{
    /**
        \class FunctorFilter
        \brief A filter that performs the operation defined inside a functor.
        The functor must conform to the Functor signature.

        \author Sergey
        \date 8/11/2015

        \tparam PointType Type of the point type.
    */
    template<typename PointType>
    class FunctorFilter : public ICloudFilter<PointType>
    {
        public:
            typedef typename std::function<typename pcl::PointCloud<PointType>::Ptr(typename pcl::PointCloud<PointType>::Ptr &)> Functor;
            typedef std::shared_ptr<FunctorFilter<PointType>> Ptr;

            FunctorFilter(Functor functor)
                : m_functor(functor)
            {
            }

            typename pcl::PointCloud<PointType>::Ptr Filter(typename pcl::PointCloud<PointType>::Ptr &cloud)
            {
                return m_functor(cloud);
            }

        private:
            Functor m_functor;
    };
}

#endif // FUNCTORFILTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/ICenterCloudFilter.h
================================================
#ifndef ICENTERCLOUDFILTER_H
#define ICENTERCLOUDFILTER_H

#include
#include
#include

namespace hpe
{
    template<typename PointType>
    class ICenterCloudFilter : public IInterface
    {
        public:
            typedef pcl::PointCloud<PointType> Cloud;

            virtual typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud, float x, float y, float z) = 0;
    };
}

#endif // ICENTERCLOUDFILTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/ICloudFilter.h
================================================
#ifndef ICLOUDFILTER_H
#define ICLOUDFILTER_H

#include
#include
#include

namespace hpe
{
    /**
        \class ICloudFilter
        \brief A generic interface that takes a cloud and produces another cloud.

        \author Sergey
        \date 8/11/2015

        \tparam PointType Type of the point type.
    */
    template<typename PointType>
    class ICloudFilter : public IInterface
    {
        public:
            typedef pcl::PointCloud<PointType> Cloud;

            virtual typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud) = 0;
    };
}

#endif // ICLOUDFILTER_H

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Filter/MovingLeastSquaresFilter.h
================================================
#pragma once
#ifndef MOVINGLEASTSQUARESFILTER_H
#define MOVINGLEASTSQUARESFILTER_H

#include
#include
//#define PCL_NO_PRECOMPILE
#include
//#undef PCL_NO_PRECOMPILE

namespace hpe
{
    template<typename PointType>
    class MovingLeastSquaresFilter : public ICloudFilter<PointType>
    {
        public:
            typedef typename ICloudFilter<PointType>::Cloud Cloud;

            typename Cloud::Ptr Filter(typename Cloud::Ptr &cloud)
            {
                typename Cloud::Ptr result(new Cloud);
                typename pcl::search::KdTree<PointType>::Ptr search(new pcl::search::KdTree<PointType>);
                pcl::MovingLeastSquares<PointType, PointType> mls;
                search->setInputCloud(cloud);
                mls.setInputCloud(cloud);
                mls.setPolynomialFit(true);
                mls.setPolynomialOrder(3);
                mls.setSearchMethod(search);
                mls.setSearchRadius(0.015);
                mls.setUpsamplingMethod(mls.VOXEL_GRID_DILATION);
                mls.setDilationVoxelSize(0.001);
                mls.process(*result);
                return result;
            }
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/GrabberBase.cpp
================================================
#include "GrabberBase.h"

#include
#include

namespace hpe
{
    GrabberBase::GrabberBase()
    {
    }

    void GrabberBase::Start()
    {
        PreRun();

        for (auto p = m_processors.begin(); p != m_processors.end(); p++)
        {
            IProcessor::Ptr processor = *p;
            processor->Init();
        }

        m_run = true;
        while (m_run)
        {
            //TODO make thread safe
            for (auto p = m_processorsToAdd.begin(); p != m_processorsToAdd.end(); p++)
            {
                IProcessor::Ptr processor = *p;
                processor->Init();
                m_processors.push_back(processor);
            }
            m_processorsToAdd.clear();

            RawFrames::Ptr rawFrames(new RawFrames);
            bool success = GetNextFrame(rawFrames->colorFrame, rawFrames->depthFrame);
            if (success == false)
            {
                break;
            }

            IDataStorage::Ptr storage = CreateDataStorage();
            storage->Set("RawFrames", rawFrames);

            for (auto p = m_processors.begin(); p != m_processors.end(); p++)
            {
                IProcessor::Ptr processor = *p;
                processor->Process(storage);
            }
        }

        for (auto p = m_processors.begin(); p != m_processors.end(); p++)
        {
            IProcessor::Ptr processor = *p;
            processor->Cleanup();
        }
    }

    void GrabberBase::Stop()
    {
        m_run = false;
    }

    void GrabberBase::AddProcessor(IProcessor::Ptr processor)
    {
        m_processorsToAdd.push_back(processor);
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/GrabberBase.h
================================================
#pragma once
#ifndef GRABBERBASE_H
#define GRABBERBASE_H

#include
#include
#include
#include
#include

namespace hpe
{
    /**
        \class GrabberBase
        \brief A generic component that decouples sensor-specific details from the recognition framework.
        It is typically subclassed for each sensor. A grabber runs a sequence of processors that
        implement the desired functionality. Recognition methods know nothing about processors
        and grabbers; they just do recognition. The grabber knows nothing about recognition
        methods, but knows how to call the processors, which in turn know nothing about the
        technical details of the sensor: they accept data from the grabber, process it, and pass it forward.
        \author Sergey
        \date 8/11/2015
    */
    class GrabberBase
    {
        public:
            GrabberBase();

            void Start();
            void Stop();
            void AddProcessor(IProcessor::Ptr processor);

        protected:
            virtual bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame) = 0;
            virtual IDataStorage::Ptr CreateDataStorage() = 0;
            virtual void PreRun() = 0;

        private:
            bool m_run;
            std::vector<IProcessor::Ptr> m_processors;
            std::vector<IProcessor::Ptr> m_processorsToAdd;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/OpenNIGrabber.cpp
================================================
#include "OpenNIGrabber.h"

#include

namespace hpe
{
    OpenNIGrabber::OpenNIGrabber(void)
    {
    }

    OpenNIGrabber::~OpenNIGrabber(void)
    {
    }

    void OpenNIGrabber::PreRun()
    {
        m_openniGrabber = std::shared_ptr<pcl::OpenNIGrabber>(new pcl::OpenNIGrabber());
        m_haveFrames = false;

        boost::function<void (const boost::shared_ptr<openni_wrapper::Image> &, const boost::shared_ptr<openni_wrapper::DepthImage> &, float)> f =
            boost::bind(&OpenNIGrabber::GrabberCallback, this, _1, _2, _3);
        m_openniGrabber->registerCallback(f);
        m_openniGrabber->start();
    }

    hpe::IDataStorage::Ptr OpenNIGrabber::CreateDataStorage()
    {
        return IDataStorage::Ptr(new MapDataStorage);
    }

    cv::Mat OpenNIGrabber::GetColorFrame(const boost::shared_ptr<openni_wrapper::Image> &img)
    {
        cv::Mat frameRGB = cv::Mat(img->getHeight(), img->getWidth(), CV_8UC3);
        img->fillRGB(frameRGB.cols, frameRGB.rows, frameRGB.data, frameRGB.step);
        cv::Mat frameBGR;
        cv::cvtColor(frameRGB, frameBGR, cv::COLOR_RGB2BGR);
        return frameBGR;
    }

    cv::Mat OpenNIGrabber::GetDepthFrame(const boost::shared_ptr<openni_wrapper::DepthImage> &depthImg)
    {
        cv::Mat frameDepth = cv::Mat(depthImg->getHeight(), depthImg->getWidth(), CV_32FC1);
        depthImg->fillDepthImage(frameDepth.cols, frameDepth.rows, (float *)frameDepth.data, frameDepth.step);
        return frameDepth;
    }

    void OpenNIGrabber::GrabberCallback(const boost::shared_ptr<openni_wrapper::Image> &color, const boost::shared_ptr<openni_wrapper::DepthImage> &depth, float c)
    {
        (void)c;
        cv::Mat colorMat = GetColorFrame(color);
        cv::Mat depthMat = GetDepthFrame(depth);
        colorMat.copyTo(m_colorFrame);
        depthMat.copyTo(m_depthFrame);
        m_haveFrames = true;
    }

    bool OpenNIGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame)
    {
        while (m_haveFrames == false)
        {
            boost::this_thread::sleep(boost::posix_time::milliseconds(10));
        }
        m_haveFrames = false;
        m_colorFrame.copyTo(colorFrame);
        m_depthFrame.copyTo(depthFrame);
        return true;
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/OpenNIGrabber.h
================================================
#pragma once
#ifndef OPENNIGRABBER_H
#define OPENNIGRABBER_H

#include
#include
#include
#include
#include

namespace hpe
{
    class OpenNIGrabber : public GrabberBase
    {
        public:
            OpenNIGrabber(void);
            ~OpenNIGrabber(void);

        protected:
            void PreRun();
            hpe::IDataStorage::Ptr CreateDataStorage();
            bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame);

        private:
            cv::Mat GetColorFrame(const boost::shared_ptr<openni_wrapper::Image> &img);
            cv::Mat GetDepthFrame(const boost::shared_ptr<openni_wrapper::DepthImage> &depthImg);
            void GrabberCallback(const boost::shared_ptr<openni_wrapper::Image> &color, const boost::shared_ptr<openni_wrapper::DepthImage> &depth, float c);

            std::shared_ptr<pcl::OpenNIGrabber> m_openniGrabber;
            cv::Mat m_colorFrame;
            cv::Mat m_depthFrame;
            bool m_haveFrames;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/PCDGrabber.cpp
================================================
#include "PCDGrabber.h"

#include
#include
#include
#include

PCDGrabber::PCDGrabber(void)
    : m_frameNumber(0), m_endNumber(100), m_storeFolder("clouds")
{
}

PCDGrabber::PCDGrabber(int start, int end)
    : m_frameNumber(start), m_endNumber(end), m_storeFolder("clouds")
{
}

PCDGrabber::PCDGrabber(std::string folder)
    : m_frameNumber(0), m_endNumber(0), m_storeFolder(folder)
{
}

PCDGrabber::PCDGrabber(int start, int end, std::string folder)
    : m_frameNumber(start), m_endNumber(end), m_storeFolder(folder)
{
}

PCDGrabber::~PCDGrabber(void)
{
}

void PCDGrabber::PreRun()
{
    m_frameNumber = 1;
    EnumerateFiles();
}

hpe::IDataStorage::Ptr PCDGrabber::CreateDataStorage()
{
    using namespace hpe;

    IDataStorage::Ptr result(new
MapDataStorage);

    CloudXYZRGBA::Ptr cloudObj(new CloudXYZRGBA(true));
    std::string path = m_pcdFiles[m_frameNumber];
    m_frameNumber += 1;
    pcl::io::loadPCDFile(path, *(cloudObj->cloud));
    result->Set("Cloud", cloudObj);

    CloudFileInfo::Ptr fileinfo(new CloudFileInfo);
    fileinfo->Filename = path;
    result->Set("FileInfo", fileinfo);

    return result;
}

bool PCDGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame)
{
    return m_frameNumber < m_pcdFiles.size();
}

struct SortNumerically
{
    template<typename T>
    T st2num(const std::string &Text)
    {
        std::stringstream ss(Text);
        T result;
        return ss >> result ? result : 0;
    }

    bool operator()(const std::string &a, const std::string &b)
    {
        namespace fs = boost::filesystem;
        fs::path p1 = fs::path(a);
        fs::path p2 = fs::path(b);
        int v1 = st2num<int>(p1.stem().string());
        int v2 = st2num<int>(p2.stem().string());
        return v1 < v2;
    }
};

void PCDGrabber::EnumerateFiles()
{
    namespace fs = boost::filesystem;
    fs::path rootFolder(m_storeFolder);
    if (fs::exists(rootFolder))
    {
        if (fs::is_directory(rootFolder))
        {
            auto it = fs::directory_iterator(rootFolder);
            for (; it != fs::directory_iterator(); it++)
            {
                auto fileName = *it;
                if (fs::is_regular(fileName))
                {
                    auto path = fileName.path();
                    fs::path extension = path.extension();
                    auto filename = path.filename().string();
                    std::string p = path.string();
                    if (extension == ".pcd")
                    {
                        m_pcdFiles.push_back(p);
                    }
                }
            }
        }
    }

    SortNumerically s;
    std::sort(m_pcdFiles.begin(), m_pcdFiles.end(), s);
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/PCDGrabber.h
================================================
#pragma once

#include
#include

class PCDGrabber : public hpe::GrabberBase
{
    public:
        PCDGrabber(void);
        PCDGrabber(int start, int end);
        PCDGrabber(int start, int end, std::string folder);
        PCDGrabber(std::string folder);
        ~PCDGrabber(void);

        class CloudFileInfo : public hpe::IDataObject
        {
            public:
                typedef std::shared_ptr<CloudFileInfo> Ptr;

                std::string Filename;
        };

    protected:
        void PreRun();
        hpe::IDataStorage::Ptr CreateDataStorage();
        bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame);

    private:
        void EnumerateFiles();

        std::vector<std::string> m_pcdFiles;
        std::string m_storeFolder;
        int m_frameNumber;
        int m_endNumber;
};

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/ProviderGrabber.cpp
================================================
#include "ProviderGrabber.h"

#include
#include

namespace hpe
{
    ProviderGrabber::ProviderGrabber(void)
        : m_currentFrame(0)
    {
    }

    ProviderGrabber::~ProviderGrabber(void)
    {
    }

    bool ProviderGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame)
    {
        if (m_currentFrame >= m_provider->GetFrameCount())
        {
            return false;
        }
        colorFrame = m_provider->GetBgr(m_currentFrame);
        depthFrame = m_provider->GetDepth(m_currentFrame);
        m_currentFrame += 1;
        return true;
    }

    IDataStorage::Ptr ProviderGrabber::CreateDataStorage()
    {
        IDataStorage::Ptr result(new MapDataStorage);
        return result;
    }

    void ProviderGrabber::SetProvider(std::shared_ptr<ProviderType> provider)
    {
        m_provider = provider;
    }

    void ProviderGrabber::PreRun()
    {
        if (m_provider.get() == nullptr)
        {
            throw HPEException("ProviderGrabber::PreRun - m_provider == nullptr");
        }
    }
}

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Grabber/ProviderGrabber.h
================================================
#pragma once
#ifndef PROVIDERGRABBER_H
#define PROVIDERGRABBER_H

#include
#include
#include

namespace hpe
{
    /**
        \class ProviderGrabber
        \brief A grabber that imitates the usual grabber functionality, but reads the frames from a Provider rather than from a sensor.
        \author Sergey
        \date 8/11/2015
    */
    class ProviderGrabber : public GrabberBase
    {
        public:
            typedef IDataProvider ProviderType;

            ProviderGrabber(void);
            ~ProviderGrabber(void);

            void SetProvider(std::shared_ptr<ProviderType> provider);

        protected:
            bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame);
            IDataStorage::Ptr CreateDataStorage();
            void PreRun();

        private:
            std::shared_ptr<ProviderType> m_provider;
            int m_currentFrame;
    };
}

#endif

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/HeadPoseEstimationFramework.vcxproj
================================================
(Visual Studio project file; the XML markup did not survive extraction. The remaining fragments show Debug and Release configurations for Win32 and x64, project GUID {A46EAD61-34BD-4EFA-B545-B8FD8C2C3BBD}, built as a Unicode StaticLibrary with warning level Level3 and WIN32;_DEBUG;_CONSOLE / WIN32;NDEBUG;_CONSOLE preprocessor definitions.)

================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Helpers/HpeHelpers.cpp
================================================
#include "HpeHelpers.h"

namespace hpe
{
    Eigen::Vector3f VectorToEulerAngles(Eigen::Vector3f v)
    {
        Eigen::Vector3f normed = v / v.norm();
        float x = normed(0);
        float y = normed(1);
        float z = normed(2);

        Eigen::Vector3f result;
        result(0) = std::atan2(z, x); // yaw
        result(1) = std::atan2(y, z); // tilt
        result(2) = std::atan(y / x); // roll
        return result;
    }

    Eigen::Vector3f EulerAnglesToVector(double yaw, double tilt, bool inDegrees)
    {
        if (inDegrees)
        {
            yaw = yaw / 180 * M_PI;
            tilt = tilt / 180 * M_PI;
        }

        Eigen::Vector3f newVector;
        double yawTan =
            std::tan(yaw);
        double tiltTan = std::tan(tilt);
        if (yawTan != 0)
        {
            newVector << 1 / yawTan, -tiltTan, 1;
        }
        else
        {
            newVector << 1000, -tiltTan, 1;
        }
        return newVector / newVector.norm();
    }

    cv::Point2f MeanPoint(std::vector<cv::Point2f> points)
    {
        cv::Point2f res(0, 0);
        for (auto pt = points.begin(); pt != points.end(); pt++)
        {
            res.x += pt->x;
            res.y += pt->y;
        }
        if (points.size() != 0)
        {
            res.x /= points.size();
            res.y /= points.size();
        }
        return res;
    }
}
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Helpers/HpeHelpers.h
================================================
#pragma once
#ifndef HPEHELPERS_H
#define HPEHELPERS_H

#include
#include
#include
#include
#include

namespace hpe
{
    Eigen::Vector3f VectorToEulerAngles(Eigen::Vector3f v);
    Eigen::Vector3f EulerAnglesToVector(double yaw, double tilt, bool inDegrees = true);

    template <typename PointType>
    Eigen::Vector3f GetNormal(typename pcl::PointCloud<PointType>::Ptr cloud, int noseInCloud)
    {
        Eigen::Vector3f result;
        typename pcl::search::KdTree<PointType>::Ptr search(new pcl::search::KdTree<PointType>);
        pcl::IndicesPtr idx(new std::vector<int>);
        idx->push_back(noseInCloud);
        pcl::NormalEstimation<PointType, pcl::Normal> ne;
        ne.setInputCloud(cloud);
        ne.setIndices(idx);
        ne.setKSearch(50);
        ne.setSearchMethod(search);
        pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
        ne.compute(*normals);
        result = normals->points[0].getNormalVector3fMap();
        return result;
    }

    cv::Point2f MeanPoint(std::vector<cv::Point2f> points);
}

#endif
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Interface/IInterface.h
================================================
#pragma once
#ifndef IINTERFACE_H
#define IINTERFACE_H

namespace hpe
{
    class IInterface
    {
    public:
        virtual ~IInterface() {}
    };
}

#endif // IINTERFACE_H
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Landmarks.h
================================================
#ifndef LANDMARKS_H
#define LANDMARKS_H

#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include

namespace hpe
{
    using namespace boost::assign;

    template <typename PointType>
    void SaveLandmarks(typename Common<PointType>::Landmarks landmarks, std::string filename)
    {
        std::ofstream stream(filename);
        std::for_each(landmarks.begin(), landmarks.end(), [&](typename Common<PointType>::Landmarks::value_type l)
        {
            stream << l.index << " " << l.point.x << " " << l.point.y << " " << l.point.z << std::endl;
        });
    }

    template <typename PointType>
    typename Common<PointType>::Landmarks LoadLandmarks(std::string filename)
    {
        typename Common<PointType>::Landmarks landmarks;
        std::ifstream istream(filename.c_str());
        std::string line;
        while (std::getline(istream, line))
        {
            std::vector<std::string> values;
            boost::split(values, line, boost::is_any_of(" \t"), boost::token_compress_on);
            typename Common<PointType>::Landmark l;
            if (values[0] != "n")
            {
                l.index = boost::lexical_cast<int>(values[0]);
                l.point.x = boost::lexical_cast<float>(values[1]);
                l.point.y = boost::lexical_cast<float>(values[2]);
                l.point.z = boost::lexical_cast<float>(values[3]);
                landmarks.push_back(l);
            }
        }
        return landmarks;
    }

    template <typename PointType>
    typename Common<PointType>::Landmarks ResampleLandmarks(typename Common<PointType>::Landmarks &oldLandmarks, typename pcl::PointCloud<PointType>::Ptr &cloudToResample)
    {
        typename Common<PointType>::Landmarks newLandmarks;
        for (int i = 0; i < oldLandmarks.size(); i++)
        {
            typename Common<PointType>::Landmark l;
            auto oldLandmark = oldLandmarks.at(i);
            l.index = oldLandmark.index;
            l.point = cloudToResample->at(l.index);
            newLandmarks.push_back(l);
        }
        return newLandmarks;
    }

    template <typename PointType>
    typename Common<PointType>::Landmarks ResampleLandmarksByPosition(typename Common<PointType>::Landmarks &oldLandmarks, typename pcl::PointCloud<PointType>::Ptr &cloudToResample)
    {
        pcl::search::KdTree<PointType> tree;
        tree.setInputCloud(cloudToResample);
        typename Common<PointType>::Landmarks newLandmarks;
        for (int i = 0; i < oldLandmarks.size(); i += 1)
        {
            std::vector<int> indices;
            std::vector<float> distances;
            tree.nearestKSearch(oldLandmarks[i].point, 1, indices, distances);
            typename Common<PointType>::Landmark newLandmark;
            newLandmark.index = indices[0];
            newLandmark.point = cloudToResample->points[newLandmark.index];
            newLandmarks.push_back(newLandmark);
        }
        return newLandmarks;
    }

    template <typename PointType>
    PointType MeanLandmark(typename Common<PointType>::Landmarks &landmarks)
    {
        PointType res;
        for (int i = 0; i < landmarks.size(); i++)
        {
            res.getVector3fMap() += landmarks[i].point.getVector3fMap();
        }
        res.getVector3fMap() /= landmarks.size();
        return res;
    }

    template <typename PointType>
    typename Common<PointType>::Landmarks TransformLandmarks(typename Common<PointType>::Landmarks &oldLandmarks, Eigen::Matrix4f transform)
    {
        typename pcl::PointCloud<PointType>::Ptr cloud(new pcl::PointCloud<PointType>);
        for (auto it = oldLandmarks.begin(); it != oldLandmarks.end(); it++)
        {
            cloud->push_back(it->point);
        }
        pcl::transformPointCloud(*cloud, *cloud, transform);
        typename Common<PointType>::Landmarks result;
        for (int i = 0; i < oldLandmarks.size(); i += 1)
        {
            typename Common<PointType>::Landmark l;
            l.index = oldLandmarks[i].index;
            l.point = cloud->points[i];
            result.push_back(l);
        }
        return result;
    }
}

#endif // LANDMARKS_H
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/ConverterProcessor.cpp
================================================
#include "ConverterProcessor.h"
#include
#include
#include

namespace hpe
{
    ConverterProcessor::ConverterProcessor(void)
        : m_cloudKey("Cloud"), m_rawFramesKey("RawFrames")
    {
    }

    ConverterProcessor::ConverterProcessor(ConverterPtr converter)
        : m_cloudKey("Cloud"), m_rawFramesKey("RawFrames"), m_converter(converter)
    {
    }

    ConverterProcessor::~ConverterProcessor(void)
    {
    }

    void ConverterProcessor::Init()
    {
        if (m_converter.get() == nullptr)
        {
            throw HPEException("ConverterProcessor::Init - m_converter.get() == nullptr");
        }
    }

    void ConverterProcessor::SetCloudKey(std::string key)
    {
        m_cloudKey = key;
    }

    void ConverterProcessor::SetRawFramesKey(std::string key)
    {
        m_rawFramesKey = key;
    }

    void ConverterProcessor::SetConverter(ConverterPtr converter)
    {
        m_converter = converter;
    }

    void ConverterProcessor::Process(IDataStorage::Ptr dataStorage)
    {
        IDataObject::Ptr object = dataStorage->Get(m_rawFramesKey);
        RawFrames::Ptr rawFrames =
            std::dynamic_pointer_cast<RawFrames>(object);
        if (rawFrames.get() == nullptr)
        {
            throw HPEException("ConverterProcessor::Process - rawFrames.get() == nullptr");
        }
        CloudXYZRGBA::Ptr cloudObject(new CloudXYZRGBA);
        cloudObject->cloud = m_converter->DepthRGBToPointCloud(rawFrames->depthFrame, rawFrames->colorFrame);
        dataStorage->Set(m_cloudKey, cloudObject);
    }
}
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/ConverterProcessor.h
================================================
#pragma once
#ifndef CONVERTERPROCESSOR_H
#define CONVERTERPROCESSOR_H

#include
#include

namespace hpe
{
    /**
        \class  ConverterProcessor
        \brief  Processor that converts an RGB-D pair into a point cloud.
                The converter must be provided by the client. In general, the
                converter contains sensor-specific information, while the
                converter processor itself is generic. The Process(...) method
                takes its input from m_rawFramesKey and puts the result under
                m_cloudKey.
        \author Sergey
        \date   8/11/2015
    */
    class ConverterProcessor : public IProcessor
    {
    public:
        typedef std::shared_ptr<IDataConverter> ConverterPtr;

        ConverterProcessor(void);
        ConverterProcessor(ConverterPtr converter);
        ~ConverterProcessor(void);

        void Init();
        void Process(IDataStorage::Ptr dataStorage);

        void SetCloudKey(std::string key);
        void SetRawFramesKey(std::string key);
        void SetConverter(ConverterPtr converter);

    private:
        std::string m_cloudKey;
        std::string m_rawFramesKey;
        ConverterPtr m_converter;

        cv::Mat AndMask(cv::Mat &matrix, cv::Mat &mat);
    };
}

#endif
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/DepthPreprocessingProcessor.cpp
================================================
#include "DepthPreprocessingProcessor.h"
#include
#include
#include
#include

namespace hpe
{
    DepthPreprocessingProcessor::DepthPreprocessingProcessor(void)
        : m_framesKey("RawFrames"),
          m_erodeSize(9),
          m_closingSize(20),
          m_gaussianSize(7),
          m_gaussianSigma(7),
          m_distanceThreshold(1.2),
          m_saveOriginalCloud(false)
    {
    }

    DepthPreprocessingProcessor::DepthPreprocessingProcessor(int erodeSize, int closingSize, int gaussianSize, float gaussianSigma, float threshold)
        : m_framesKey("RawFrames"),
          m_erodeSize(erodeSize),
          m_closingSize(closingSize),
          m_gaussianSize(gaussianSize),
          m_gaussianSigma(gaussianSigma),
          m_distanceThreshold(threshold),
          m_saveOriginalCloud(false)
    {
    }

    DepthPreprocessingProcessor::~DepthPreprocessingProcessor(void)
    {
    }

    void DepthPreprocessingProcessor::Process(IDataStorage::Ptr storage)
    {
        RawFrames::Ptr frames = storage->GetAndCastNotNull<RawFrames>(m_framesKey);

        if (m_saveOriginalCloud)
        {
            CloudXYZRGBA::Ptr cloudObject(new CloudXYZRGBA);
            cloudObject->cloud = m_converter->DepthRGBToPointCloud(frames->depthFrame, frames->colorFrame);
            storage->Set("OriginalCloud", cloudObject);
        }

        cv::Mat mask = frames->depthFrame < m_distanceThreshold;
        cv::Mat convolved = AndMask(frames->depthFrame, mask);
        cv::GaussianBlur(convolved, convolved, cv::Size(m_gaussianSize, m_gaussianSize), m_gaussianSigma, m_gaussianSigma, cv::BORDER_DEFAULT);

        cv::Mat mask2;
        cv::Mat closingKernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(m_closingSize, m_closingSize));
        cv::morphologyEx(mask, mask2, cv::MORPH_CLOSE, closingKernel);

        cv::Mat erodeKernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(m_erodeSize, m_erodeSize));
        cv::Mat realMask;
        cv::morphologyEx(mask2, realMask, cv::MORPH_ERODE, erodeKernel);

        cv::Mat bodyExtracted = AndMask(convolved, realMask);
        bodyExtracted.copyTo(frames->depthFrame);
    }

    cv::Mat DepthPreprocessingProcessor::AndMask(cv::Mat &matrix, cv::Mat &mat)
    {
        if (matrix.size() != mat.size())
        {
            throw HPEException("cv::Mat AndMask(cv::Mat &matrix, cv::Mat &mat) - matrix.size() != mat.size()");
        }
        cv::Mat result(matrix.rows, matrix.cols, matrix.type());
        cv::MatIterator_<float> target = result.begin<float>();
        cv::MatIterator_<float> source = matrix.begin<float>();
        cv::MatIterator_<uchar> maskIterator = mat.begin<uchar>();
        for (; target != result.end<float>(); ++target, ++source, ++maskIterator)
        {
            if (*maskIterator != 0)
            {
                *target = *source;
            }
            else
            {
                *target = std::numeric_limits<float>::quiet_NaN();
            }
        }
        return result;
    }

    void DepthPreprocessingProcessor::SetConverter(ConverterPtr c)
    {
        m_converter = c;
        m_saveOriginalCloud = true;
    }
}
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/DepthPreprocessingProcessor.h
================================================
#pragma once
#ifndef DEPTHPREPROCESSINGPROCESSOR_H
#define DEPTHPREPROCESSINGPROCESSOR_H

#include
#include
#include

namespace hpe
{
    /**
        \class  DepthPreprocessingProcessor
        \brief  Depth smoothing and hole closing are done here. To get an idea,
                have a look at the Process code.
        \author Sergey
        \date   8/11/2015
    */
    class DepthPreprocessingProcessor : public IProcessor
    {
    public:
        typedef std::shared_ptr<IDataConverter> ConverterPtr;

        DepthPreprocessingProcessor(void);
        DepthPreprocessingProcessor(int erodeSize, int closingSize, int gaussianSize, float gaussianSigma, float threshold);
        ~DepthPreprocessingProcessor(void);

        void Process(IDataStorage::Ptr storage) override;
        void SetConverter(ConverterPtr c);

    private:
        cv::Mat AndMask(cv::Mat &matrix, cv::Mat &mat);

        std::string m_framesKey;
        int m_gaussianSize;
        int m_erodeSize;
        int m_closingSize;
        float m_gaussianSigma;
        float m_distanceThreshold;
        bool m_saveOriginalCloud;
        ConverterPtr m_converter;
    };
}

#endif
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/DetectorProcessor.cpp
================================================
#include "DetectorProcessor.h"
#include
#include
#include

namespace hpe
{
    DetectorProcessor::DetectorProcessor(void)
        : m_detector("DetectorData")
    {
    }

    DetectorProcessor::DetectorProcessor(std::string dataDir)
        : m_detector(dataDir)
    {
    }

    DetectorProcessor::~DetectorProcessor(void)
    {
    }

    void DetectorProcessor::Process(hpe::IDataStorage::Ptr dataStorage)
    {
        hpe::RawFrames::Ptr frames = dataStorage->GetAndCastNotNull<RawFrames>("RawFrames", "DetectorProcessor::Process - can't find raw
frames"); hpe::headPoseInfo detectorInfo = m_detector.Detect(frames->depthFrame); hpe::HeadPoseInformation::Ptr info(new hpe::HeadPoseInformation); if (detectorInfo.x != 0 && detectorInfo.y != 0) { info->DepthFrameX = detectorInfo.x; info->DepthFrameY = detectorInfo.y; float depthValue = frames->depthFrame.at(detectorInfo.y, detectorInfo.x); KinectDataConverter converter; pcl::PointXYZ pt = converter.ConvertOnePoint(detectorInfo.x, detectorInfo.y, depthValue, cv::Size(frames->depthFrame.cols, frames->depthFrame.rows)); info->WorldX = pt.x; info->WorldY = pt.y; info->WorldZ = pt.z; if (detectorInfo.tilt != -100 && detectorInfo.yaw != -100) { info->Tilt = detectorInfo.tilt; info->Yaw = detectorInfo.yaw; info->HasHeadInfo = true; } } dataStorage->Set("HeadPoseInfo", info); } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/DetectorProcessor.h ================================================ #pragma once #ifndef DETECTORPROCESSOR_H #define DETECTORPROCESSOR_H #include #include namespace hpe { class DetectorProcessor : public hpe::IProcessor { public: DetectorProcessor(void); DetectorProcessor(std::string dataDir); ~DetectorProcessor(void); void Process(hpe::IDataStorage::Ptr dataStorage); private: hpe::DepthDetector m_detector; }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/FacialExpressionsProcessor.cpp ================================================ #include "FacialExpressionsProcessor.h" #include #include #include #include #include namespace hpe { void FacialExpressionProcessor::Process(IDataStorage::Ptr dataStorage) { if (m_dataReady) { m_facialExpressionReady(m_ferResult); m_dataReady = false; } if (m_workerBusy == false) { CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCast("OriginalCloud"); if (cloudObject.get() == nullptr) { cloudObject = dataStorage->GetAndCastNotNull("Cloud", "FacialExpressionProcessor::Process - cloud is null"); 
} LandmarksObject::Ptr landmarksObject = dataStorage->GetAndCastNotNull>("Landmarks", "FacialExpressionProcessor::Process - landmarks are null"); Common::Landmarks l; for (int i = 0; i < landmarksObject->landmarks.size(); i += 1) { PointT pt; pt.getVector3fMap() = landmarksObject->landmarks[i].point.getVector3fMap(); l.push_back(Common::Landmark(landmarksObject->landmarks[i].index, pt)); } m_landmarks = l; pcl::copyPointCloud(*(cloudObject->cloud), m_cloud); m_workerBusy = true; } } std::shared_ptr FacialExpressionProcessor::CreateCylindricalSampler() { /* PhiRange=0.05,0.95 ZRange=1.3,0.9 TopRows=45 BottomRows=105 SampleColumns=120 */ CylinderSampler::Range phiRange; phiRange.first = 0.05; phiRange.second = 0.95; CylinderSampler::Range zRange; zRange.first = 0.9; zRange.second = 1.3; int topRows = 45; int bottomRows = 105; int sampleColumns = 120; std::vector leftEyeIndices; leftEyeIndices.push_back(0); std::vector rightEyeIndices; rightEyeIndices.push_back(1); std::shared_ptr sampler(new CylinderSampler(phiRange, zRange, topRows, bottomRows, sampleColumns)); sampler->SetVisualize(false); sampler->SetSupportOptimizedSampling(true); sampler->SetEyeIndices(leftEyeIndices, rightEyeIndices); return sampler; } FacialExpressionProcessor::FacialExpressionProcessor() { Init("FERData"); } FacialExpressionProcessor::FacialExpressionProcessor(std::string dataFolder) { Init(dataFolder); } //called in constructors void FacialExpressionProcessor::Init(std::string dataFolder) { m_workerBusy = false; m_dataReady = false; std::string paramsDir = (boost::format("%s/params/") % dataFolder).str(); std::string forestDir = (boost::format("%s/trees/") % dataFolder).str(); fer::RV_readParamList(paramsDir, &m_ferParameters); m_forests = fer::RV_readAllForests(forestDir, m_ferParameters); m_numberOfClasses = 3; } //called before grabber starts to send frames void FacialExpressionProcessor::Init() { std::shared_ptr sampler = CreateCylindricalSampler(); m_featureCalculator = 
std::shared_ptr>(new hpe::CylinderOptimizedFeatureCalculator(sampler, hpe::FeatureType::Depth)); m_featureCalculator->SetFeatureSize(cv::Size(120, 150)); boost::thread t(boost::bind(&FacialExpressionProcessor::ThreadRoutine, this)); } void FacialExpressionProcessor::ThreadRoutine() { while (true) { while (m_workerBusy == false) { boost::this_thread::sleep(boost::posix_time::milliseconds(10)); } auto l = m_landmarks; pcl::PointCloud::Ptr cloud(new pcl::PointCloud); Eigen::Vector3f zAxis; zAxis << 0, 0, 1; Eigen::Vector3f normal = l[3].point.getVector3fMap() - l[2].point.getVector3fMap(); Eigen::Quaternionf rotation; rotation.setFromTwoVectors(normal, zAxis); Eigen::Matrix4f transformRotation = Eigen::Matrix4f::Identity(); transformRotation.block<3, 3>(0, 0) = rotation.toRotationMatrix(); pcl::transformPointCloud(m_cloud, *cloud, transformRotation); l = TransformLandmarks(l, transformRotation); Eigen::Vector3f xAxis; xAxis << 1, 0, 0; Eigen::Vector3f eyesLine = l[1].point.getVector3fMap() - l[0].point.getVector3fMap(); rotation.setFromTwoVectors(eyesLine, xAxis); transformRotation = Eigen::Matrix4f::Identity(); transformRotation.block<3, 3>(0, 0) = rotation.toRotationMatrix(); pcl::transformPointCloud(*cloud, *cloud, transformRotation); l = TransformLandmarks(l, transformRotation); m_featureCalculator->SetLandmarks(l); cv::Mat features = m_featureCalculator->GetFeatures(cloud); cv::Mat unrolled = features.reshape(0, 120); unrolled.convertTo(unrolled, CV_64FC1); cv::Mat frame = fer::RV_preprocessDepthFrame(unrolled); std::vector bgPerc(m_ferParameters.noPatches); bgPerc.assign(m_ferParameters.noPatches, 0); std::vector featData; fer::RV_featExtractionSingleFrame(frame, m_ferParameters, &bgPerc, featData); std::vector weights(featData.size()); weights.assign(featData.size(), 1.0 / featData.size()); std::vector finalProbs(m_numberOfClasses); finalProbs.assign(m_numberOfClasses, 0); for (int i = 0; i < featData.size(); i++) { cv::Mat localData = featData[i]; std::vector 
localForest = m_forests[i]; if (bgPerc[i] < 0.5) { std::vector probs = RV_testForest(localData, localForest, m_ferParameters, m_numberOfClasses); for (int k = 0; k < m_numberOfClasses; k++) { finalProbs[k] += probs[k] * weights[i]; } } } m_ferResult = finalProbs; m_dataReady = true; m_workerBusy = false; } } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/FacialExpressionsProcessor.h ================================================ #pragma once #ifndef FACIALEXPRESSIONSPROCESSOR_H #define FACIALEXPRESSIONSPROCESSOR_H #include #include #include #include #include #include namespace hpe { class FacialExpressionProcessor: public IProcessor { public: typedef boost::signals2::signal)> FacialExressionReadySignal; FacialExpressionProcessor(); FacialExpressionProcessor(std::string dataFolder); void Process(IDataStorage::Ptr dataStorage); void Init(); void SubscribeFacialExpressionReadySignal(const FacialExressionReadySignal::slot_type &slot) { m_facialExpressionReady.connect(slot); } private: typedef pcl::PointXYZRGBNormal PointT; typedef pcl::PointXYZRGBNormal NormalT; void Init(std::string); std::shared_ptr CreateCylindricalSampler(); void ThreadRoutine(); fer::paramList m_ferParameters; std::vector> m_forests; std::shared_ptr> m_featureCalculator; bool m_workerBusy; bool m_dataReady; std::vector m_ferResult; Common::Landmarks m_landmarks; pcl::PointCloud m_cloud; int m_numberOfClasses; FacialExressionReadySignal m_facialExpressionReady; }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/FilterProcessor.cpp ================================================ #include #include #include namespace hpe { FilterProcessor::FilterProcessor(FilterPtr filter, std::string inputKey, std::string outputKey) : m_filter(filter), m_inputKey(inputKey), m_outputKey(outputKey) { } void FilterProcessor::Process(IDataStorage::Ptr dataStorage) { CloudXYZRGBA::Ptr cloudObject 
            = dataStorage->GetAndCastNotNull<CloudXYZRGBA>(m_inputKey);
        CloudXYZRGBA::Ptr filteredCloudObject(new CloudXYZRGBA);
        filteredCloudObject->cloud = m_filter->Filter(cloudObject->cloud);
        dataStorage->Set(m_outputKey, filteredCloudObject);
    }
}
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/FilterProcessor.h
================================================
#pragma once
#ifndef FILTERPROCESSOR_H
#define FILTERPROCESSOR_H

#include
#include

namespace hpe
{
    class FilterProcessor : public IProcessor
    {
    public:
        typedef std::shared_ptr> FilterPtr;

        FilterProcessor(FilterPtr filter, std::string inputKey, std::string outputKey);

        void Process(IDataStorage::Ptr dataStorage);

    private:
        FilterPtr m_filter;
        std::string m_inputKey;
        std::string m_outputKey;
    };
}

#endif
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/HeadExtractionProcessor.cpp
================================================
#include "HeadExtractionProcessor.h"
#include
#include
#include
#include
#include
#include

namespace hpe
{
    HeadExtractionProcessor::HeadExtractionProcessor(void)
        : m_cloudKey("Cloud"), m_filteredCloudKey("FilteredCloud"), m_readyToExtract(false)
    {
    }

    HeadExtractionProcessor::~HeadExtractionProcessor(void)
    {
    }

    void HeadExtractionProcessor::Process(IDataStorage::Ptr storage)
    {
        CloudXYZRGBA::Ptr cloudObject = storage->GetAndCastNotNull<CloudXYZRGBA>(m_cloudKey);
        HeadPoseInformation::Ptr headPoseInfo = storage->GetAndCast<HeadPoseInformation>("HeadPoseInfo");
        if (m_readyToExtract == false)
        {
            if (headPoseInfo.get() != nullptr)
            {
                std::shared_ptr<BoxFilter<TPoint>> boxFilter(new BoxFilter<TPoint>(
                    headPoseInfo->WorldX + 0.075f, headPoseInfo->WorldX - 0.075f,
                    headPoseInfo->WorldY + 0.1f, headPoseInfo->WorldY - 0.106f,
                    -1e6, 1e6));
                m_filteringQueue.AddFilter(boxFilter);
            }
            else
            {
                Common<TPoint>::Landmarks landmarks;
                PointPicker<TPoint> picker;
                picker.SetCloud(cloudObject->cloud);
                landmarks = picker.Pick("Select face boundaries", 4,
                                        "Select the four face boundaries in the following order:\n"
                                        "\t1. Leftmost point.\n"
                                        "\t2. Rightmost point.\n"
                                        "\t3. Topmost point.\n"
                                        "\t4. Bottommost point (chin).\n"
                                        "NOTE: Make sure you see the front part of the face. Try rotating the view.");
                float delta = 0.03f;
                std::shared_ptr<BoxFilter<TPoint>> boxFilter(new BoxFilter<TPoint>(
                    landmarks[0].point.x + delta, landmarks[1].point.x - delta,
                    landmarks[2].point.y + delta, landmarks[3].point.y - delta,
                    -1e6, 1e6));
                m_filteringQueue.AddFilter(boxFilter);
            }
            m_filteringQueue.AddFilterFunctor([](TCloud::Ptr & cloud) -> TCloud::Ptr
            {
                return Voxelize(cloud, 0.004f);
            });
            m_readyToExtract = true;
        }
        CloudXYZRGBA::Ptr filteredCloudObject(new CloudXYZRGBA);
        filteredCloudObject->cloud = m_filteringQueue.Filter(cloudObject->cloud);
        storage->Set(m_filteredCloudKey, filteredCloudObject);
    }
}
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/HeadExtractionProcessor.h
================================================
#pragma once
#ifndef HEADEXTRACTIONPROCESSOR_H
#define HEADEXTRACTIONPROCESSOR_H

#include
//#include
#include
#include
#include

namespace hpe
{
    class HeadExtractionProcessor : public IProcessor
    {
    public:
        HeadExtractionProcessor(void);
        ~HeadExtractionProcessor(void);

        void Process(IDataStorage::Ptr storage) override;

    private:
        typedef pcl::PointXYZRGBA TPoint;
        typedef pcl::PointCloud<TPoint> TCloud;

        std::string m_cloudKey;
        std::string m_filteredCloudKey;
        bool m_readyToExtract;
        FilteringQueue<TPoint> m_filteringQueue;
    };
}

#endif
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/IProcessor.h
================================================
#pragma once
#ifndef IPROCESSOR_H
#define IPROCESSOR_H

#include
#include
#include

namespace hpe
{
    class IProcessor : public IInterface
    {
    public:
        typedef std::shared_ptr<IProcessor> Ptr;

        virtual void Process(IDataStorage::Ptr dataStorage) = 0;
        virtual void Init() {};
        virtual void Cleanup() {};
    };
}

#endif
================================================
FILE:
FaceCept3D/HeadPoseEstimationFramework/Processor/SaveCloudProcessor.cpp ================================================ #include "SaveCloudProcessor.h" #include #include #include #include #include #include #include #include namespace hpe { SaveCloudProcessor::SaveCloudProcessor(void) : m_frameNumber(0), m_dir("clouds"), m_running(true) { } SaveCloudProcessor::~SaveCloudProcessor(void) { m_videoWriter->release(); } void SaveCloudProcessor::Init() { m_writingThread = boost::thread(boost::bind(&SaveCloudProcessor::SavingThreadRoutine, this)); namespace pt = boost::posix_time; m_dir = pt::to_iso_string(pt::second_clock::local_time()); boost::filesystem::create_directory(m_dir); } void SaveCloudProcessor::ProcessEntry(Entry entry) { if (m_videoWriter.get() == nullptr) { m_videoWriter = std::shared_ptr(new cv::VideoWriter); m_videoWriter->open((boost::format("%1%/video.mpg") % m_dir).str(), CV_FOURCC('M', 'P', 'E', 'G'), 30, cv::Size(640, 480)); } std::vector channels; cv::split(entry.frame, channels); channels.erase(channels.begin() + 3); cv::Mat colorWithoutAlpha; cv::merge(channels, colorWithoutAlpha); std::vector indices; pcl::removeNaNFromPointCloud(*(entry.cloud), *(entry.cloud), indices); std::string savePathCloud = (boost::format("%1%/%2%.pcd") % m_dir % m_frameNumber).str(); m_frameNumber += 1; m_videoWriter->write(colorWithoutAlpha); pcl::io::savePCDFileBinary(savePathCloud, *(entry.cloud)); } void SaveCloudProcessor::SavingThreadRoutine() { while (m_running) { Entry entry; m_mutex.lock(); if (m_entries.size() != 0) { entry = m_entries.front(); m_entries.pop(); m_mutex.unlock(); ProcessEntry(entry); } else { m_mutex.unlock(); boost::this_thread::sleep(boost::posix_time::milliseconds(100)); } } m_mutex.lock(); while (m_entries.size() != 0) { ProcessEntry(m_entries.front()); m_entries.pop(); } m_videoWriter->release(); m_mutex.unlock(); } void SaveCloudProcessor::Cleanup() { m_running = false; m_writingThread.join(); } void 
hpe::SaveCloudProcessor::Process(IDataStorage::Ptr dataStorage) { CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCast("Cloud"); RawFrames::Ptr frames = dataStorage->GetAndCast("RawFrames"); if (cloudObject.get() == nullptr || frames.get() == nullptr) { return; } Entry entry; frames->colorFrame.copyTo(entry.frame); pcl::copyPointCloud(*(cloudObject->cloud), *(entry.cloud)); m_mutex.lock(); m_entries.push(entry); m_mutex.unlock(); } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/SaveCloudProcessor.h ================================================ #pragma once #ifndef SAVECLOUDPROCESSOR_H #define SAVECLOUDPROCESSOR_H #include #include #include #include #include #include #include namespace hpe { class SaveCloudProcessor : public IProcessor { public: SaveCloudProcessor(void); ~SaveCloudProcessor(void); void Process(IDataStorage::Ptr dataStorage); void Init(); void Cleanup(); private: struct Entry { Entry() : cloud(new pcl::PointCloud) {} cv::Mat frame; pcl::PointCloud::Ptr cloud; }; int m_frameNumber; std::string m_dir; std::shared_ptr m_videoWriter; boost::mutex m_mutex; std::queue m_entries; boost::thread m_writingThread; bool m_running; void SavingThreadRoutine(); void ProcessEntry(Entry entry); }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/ShowCloudProcessor.cpp ================================================ #include "ShowCloudProcessor.h" #include #include namespace hpe { ShowCloudProcessor::ShowCloudProcessor(void) : m_key("Cloud"), m_visualizer("Cloud"), m_first(true) { } ShowCloudProcessor::ShowCloudProcessor(std::string key) : m_key(key), m_visualizer(key), m_first(true) { } ShowCloudProcessor::~ShowCloudProcessor(void) { } void ShowCloudProcessor::Process(IDataStorage::Ptr storage) { IDataObject::Ptr obj = storage->Get(m_key); CloudXYZRGBA::Ptr cloudObject = std::dynamic_pointer_cast(obj); if (cloudObject.get() == nullptr) { 
return; } if (m_first) { m_visualizer.registerKeyboardCallback(&ShowCloudProcessor::KeyboardEventCallback, this); m_visualizer.addPointCloud(cloudObject->cloud, m_key); m_first = false; } else { m_visualizer.updatePointCloud(cloudObject->cloud, m_key); } m_visualizer.spinOnce(); } void ShowCloudProcessor::KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, void *sender) { ShowCloudProcessor *_this = static_cast(sender); _this->KeyPressed(event); } void ShowCloudProcessor::KeyPressed(const pcl::visualization::KeyboardEvent &event) { } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/ShowCloudProcessor.h ================================================ #pragma once #ifndef SHOWCLOUDPROCESSOR_H #define SHOWCLOUDPROCESSOR_H #include #include #include namespace hpe { class ShowCloudProcessor : public IProcessor { public: ShowCloudProcessor(void); ShowCloudProcessor(std::string key); ~ShowCloudProcessor(void); void Process(IDataStorage::Ptr storage); protected: virtual void KeyPressed(const pcl::visualization::KeyboardEvent &event); private: static void KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, void *sender); std::string m_key; pcl::visualization::PCLVisualizer m_visualizer; bool m_first; }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/TemplateCreatorProcessor.cpp ================================================ #include "TemplateCreatorProcessor.h" #include namespace hpe { TemplateCreatorProcessor::TemplateCreatorProcessor(void) : m_cloudKey("FilteredCloud"), m_templateKey("Template") { } TemplateCreatorProcessor::~TemplateCreatorProcessor(void) { } void TemplateCreatorProcessor::Init() { } void TemplateCreatorProcessor::Process(IDataStorage::Ptr dataStorage) { CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCast(m_cloudKey); if (cloudObject.get() != nullptr) { CloudXYZRGBA::Ptr templateObject(new 
                CloudXYZRGBA);
            templateObject->cloud = m_templateCreator.AddCloudToTemplate(cloudObject->cloud);
            dataStorage->Set(m_templateKey, templateObject);
        }
    }

    IncrementalHeadTemplateCreator::CloudType::Ptr TemplateCreatorProcessor::GetTemplate()
    {
        return m_templateCreator.GetTemplate();
    }
}
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/TemplateCreatorProcessor.h
================================================
#pragma once
#ifndef TEMPLATECREATORPROCESSOR_H
#define TEMPLATECREATORPROCESSOR_H

#include
#include

namespace hpe
{
    /**
        \class  TemplateCreatorProcessor
        \brief  A head template is created here. The processor uses m_cloudKey
                as its input and iteratively registers consecutive point clouds
                using IncrementalHeadTemplateCreator.
        \author Sergey
        \date   8/11/2015
    */
    class TemplateCreatorProcessor : public IProcessor
    {
    public:
        TemplateCreatorProcessor(void);
        ~TemplateCreatorProcessor(void);

        void Init() override;
        void Process(IDataStorage::Ptr dataStorage) override;

        IncrementalHeadTemplateCreator::CloudType::Ptr GetTemplate();

    private:
        std::string m_cloudKey;
        std::string m_templateKey;
        IncrementalHeadTemplateCreator m_templateCreator;
    };
}

#endif
================================================
FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/TrackingProcessor.cpp
================================================
#include "TrackingProcessor.h"
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include

namespace hpe
{
    TrackingProcessor::TrackingProcessor(Cloud::Ptr templateCloud)
        : m_template(new Cloud),
          m_first(true),
          m_leftEyeIdx(-1),
          m_rightEyeIdx(-1),
          m_noseIdx(-1),
          m_saveLandmarks(false)
    {
        pcl::copyPointCloud(*templateCloud, *m_template);
        m_template = Voxelize(m_template, 0.0075);
        CloudMapper::DynamicICPParams dynamicIcpParams;
        dynamicIcpParams.push_back(CloudMapper::ICPParams(0.007, 0.005, 10e-6, 400, 10));
m_mapper.SetDynamicICPParams(dynamicIcpParams); m_mapper.SetUsePointToPlaneMetric(true); m_tracker = std::shared_ptr>(new ICPTemplateTracker(m_template)); } TrackingProcessor::~TrackingProcessor(void) { } void TrackingProcessor::Process(IDataStorage::Ptr storage) { CloudXYZRGBA::Ptr cloudObject = storage->GetAndCastNotNull("Cloud", "TrackingProcessor::Process - Cloud is null"); CloudXYZRGBA::Ptr updatedTemplateObject(new CloudXYZRGBA); updatedTemplateObject->cloud = Cloud::Ptr(new Cloud); HeadPoseInformation::Ptr info = storage->GetAndCast("HeadPoseInfo"); PCDGrabber::CloudFileInfo::Ptr fileinfo = storage->GetAndCast("FileInfo"); if (m_first) { if (m_templateLandmarks.size() != 3) { m_templateLandmarks = GetLandmarks(m_template); } m_leftEyeIdx = m_templateLandmarks[1].index; m_rightEyeIdx = m_templateLandmarks[0].index; m_noseIdx = m_templateLandmarks[2].index; if (info.get() == nullptr) { Common::Landmarks frameLandmarks; if (fileinfo.get() != nullptr) { std::string landmarksFile = boost::replace_all_copy(fileinfo->Filename, ".pcd", ".bnd"); if (boost::filesystem::exists(landmarksFile)) { frameLandmarks = LoadLandmarks(landmarksFile); } } if (frameLandmarks.size() != 3) { frameLandmarks = GetLandmarks(cloudObject->cloud); std::string landmarksFile = boost::replace_all_copy(fileinfo->Filename, ".pcd", ".bnd"); SaveLandmarks(frameLandmarks, landmarksFile); } Eigen::Matrix4f transform = m_mapper.GetTransformHavingLandmarks(m_template, m_templateLandmarks, cloudObject->cloud, frameLandmarks); pcl::transformPointCloud(*m_template, *(updatedTemplateObject->cloud), transform); m_tracker->RoughFirstTimeDetection(cloudObject->cloud, transform); } else { Eigen::Vector3f zVector = GetNormal(m_template, m_noseIdx); if (zVector(2) < 0) { zVector = -zVector; } RawFrames::Ptr frames = storage->GetAndCastNotNull("RawFrames", "TrackingProcessor::Process - RawFrames is null"); KinectDataConverter converter; float depthValue = 0; int squareSize = 0; while (depthValue == 0) { 
cv::Mat square = frames->depthFrame .rowRange(info->DepthFrameY - squareSize, info->DepthFrameY + squareSize + 1) .colRange(info->DepthFrameX - squareSize, info->DepthFrameX + squareSize + 1); double min, max; cv::minMaxIdx(square, &min, &max); depthValue = max; squareSize += 1; } pcl::PointXYZ pt = converter.ConvertOnePoint(info->DepthFrameX, info->DepthFrameY, depthValue, cv::Size(frames->depthFrame.cols, frames->depthFrame.rows)); Eigen::Vector3f orientation = pt.getVector3fMap(); orientation /= orientation.norm(); Eigen::Quaternionf rotation; rotation.setFromTwoVectors(zVector, orientation); Eigen::Matrix4f transformRotation = Eigen::Matrix4f::Identity(); transformRotation.block<3, 3>(0, 0) = rotation.toRotationMatrix(); pcl::transformPointCloud(*m_template, *(updatedTemplateObject->cloud), transformRotation); pcl::PointXYZRGBA nose = updatedTemplateObject->cloud->points[m_templateLandmarks[2].index]; Eigen::Matrix4f transformTranslation = Eigen::Matrix4f::Identity(); transformTranslation.block<3, 1>(0, 3) = pt.getVector3fMap() - nose.getVector3fMap(); pcl::transformPointCloud(*(updatedTemplateObject->cloud), *(updatedTemplateObject->cloud), transformTranslation); Eigen::Matrix4f transform = transformRotation * transformTranslation; Eigen::Matrix4f identity = Eigen::Matrix4f::Identity(); m_tracker->SetTemplate(updatedTemplateObject->cloud); m_tracker->RoughFirstTimeDetection(cloudObject->cloud, identity); updatedTemplateObject->cloud = m_tracker->Update(cloudObject->cloud); } m_first = false; } else { updatedTemplateObject->cloud = m_tracker->Update(cloudObject->cloud); } Common::Landmarks landmarks = ResampleLandmarks(m_templateLandmarks, updatedTemplateObject->cloud); //landmarks.push_back(Common::Landmark(m_leftEyeIdx, updatedTemplateObject->cloud->points[m_leftEyeIdx])); //landmarks.push_back(Common::Landmark(m_rightEyeIdx, updatedTemplateObject->cloud->points[m_rightEyeIdx])); //landmarks.push_back(Common::Landmark(m_noseIdx, 
updatedTemplateObject->cloud->points[m_noseIdx])); PointType pointOnNormal = updatedTemplateObject->cloud->points[m_noseIdx]; pointOnNormal.getArray3fMap() = pointOnNormal.getVector3fMap() + GetNormal(updatedTemplateObject->cloud, m_noseIdx); landmarks.push_back(Common::Landmark(-1, pointOnNormal)); if (fileinfo.get() != nullptr) { if (m_saveLandmarks) { std::string eyespath = boost::replace_all_copy(fileinfo->Filename, ".pcd", ".bnd"); std::cout << eyespath << std::endl; SaveLandmarks(landmarks, eyespath); } } LandmarksObject::Ptr landmarksObject(new LandmarksObject); landmarksObject->landmarks = landmarks; storage->Set("Landmarks", landmarksObject); storage->Set("UpdatedTemplate", updatedTemplateObject); Eigen::Vector3f headPose = VectorToEulerAngles(GetNormal(updatedTemplateObject->cloud, m_noseIdx)); m_headPoseReadySignal(headPose); } Common::Landmarks TrackingProcessor::GetLandmarks(Cloud::Ptr cloud) { PointPicker picker; picker.SetCloud(cloud); return picker.Pick("3 points", 3); } void TrackingProcessor::SetTemplateLandmarksFromFile(std::string landmarksFile) { Common::Landmarks l; if (boost::filesystem::exists(landmarksFile)) { l = LoadLandmarks(landmarksFile); } else { PointPicker picker; picker.SetCloud(m_template); l = picker.Pick("eyes and nose", 3); SaveLandmarks(l, landmarksFile); } m_templateLandmarks = l; } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Processor/TrackingProcessor.h ================================================ #pragma once #ifndef TRACKINGPROCESSOR_H #define TRACKINGPROCESSOR_H #include #include #include #include #include #include #include #include namespace hpe { class TrackingProcessor : public IProcessor { public: typedef pcl::PointXYZRGBA PointType; typedef pcl::PointCloud Cloud; TrackingProcessor(Cloud::Ptr templateCloud); ~TrackingProcessor(void); void Process(IDataStorage::Ptr storage); typedef boost::signals2::signal HeadPoseReadySignal; void 
SubscribeHeadposeReadySignal(const HeadPoseReadySignal::slot_type &slot) { m_headPoseReadySignal.connect(slot); } void SetTemplateLandmarks(Common::Landmarks &landmarks) { m_templateLandmarks = ResampleLandmarksByPosition(landmarks, m_template); } void SetSaveLandmakrs(bool value) { m_saveLandmarks = value; } void ResetTracker() { m_tracker->ResetTracking(); m_first = true; } void SetTemplateLandmarksFromFile(std::string landmarksFile); private: Common::Landmarks GetLandmarks(Cloud::Ptr cloud); int m_leftEyeIdx; int m_rightEyeIdx; int m_noseIdx; bool m_first; Cloud::Ptr m_template; CloudMapper m_mapper; std::shared_ptr> m_tracker; Common::Landmarks m_templateLandmarks; bool m_saveLandmarks; HeadPoseReadySignal m_headPoseReadySignal; }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Provider/IDataProvider.h ================================================ #ifndef IDATAPROVIDER_H #define IDATAPROVIDER_H #include #include #include #include namespace hpe { /** \class IDataProvider \brief An interface that provides access to RGB-D pairs, point clouds, and landmarks if available. Can be used to imitate grabbing from a sensor. \author Sergey \date 8/11/2015 \tparam PointType Type of the point type. 
*/ template class IDataProvider : public IInterface { public: virtual int GetFrameCount() = 0; virtual cv::Mat GetDepth(int frameNumber) = 0; virtual cv::Mat GetBgr(int frameNumber) = 0; virtual typename pcl::PointCloud::Ptr GetCloud(int frameNumber) = 0; virtual typename Common::Landmarks GetPoints(int frameNumber) = 0; }; } #endif // IDATAPROVIDER_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Provider/RawDataProvider.cpp ================================================ #include "RawDataProvider.h" #include #include #include #include namespace fs = boost::filesystem; namespace hpe { RawDataProvider::RawDataProvider() : m_initialized(false) { } cv::Mat RawDataProvider::GetDepth(int frameNumber) { CvMatRawSerializer reader; cv::Mat depthImage; reader.Load(m_depthImageFiles[frameNumber], depthImage); return depthImage; } cv::Mat RawDataProvider::GetBgr(int frameNumber) { return cv::imread(m_imageFiles[frameNumber]); } pcl::PointCloud::Ptr RawDataProvider::GetCloud(int frameNumber) { CvMatRawSerializer reader; cv::Mat depthImage; reader.Load(m_depthImageFiles[frameNumber], depthImage); cv::Mat image = cv::imread(m_imageFiles[frameNumber]); KinectDataConverter converter; return converter.DepthRGBToPointCloud(depthImage, image); } RawDataProvider::Landmarks RawDataProvider::GetPoints(int frameNumber) { RawDataProvider::Landmarks landmarks; if (m_pointsFiles.empty() || frameNumber >= (int)m_pointsFiles.size()) { return landmarks; } std::string landmarkFile = m_pointsFiles[frameNumber]; std::ifstream stream(landmarkFile.c_str()); std::string line; auto cloud = GetCloud(frameNumber); while (std::getline(stream, line)) { boost::tokenizer> tokenizer(line); std::vector tokens; tokens.assign(tokenizer.begin(), tokenizer.end()); RawDataProvider::Landmarks::value_type landmark; std::stringstream sstream(tokens[0]); int x, y; sstream >> x; sstream.str(tokens[1]); sstream.clear(); sstream >> y; landmark.index = (int)x + cloud->width *
(int)y; landmark.point = cloud->at(landmark.index); landmarks.push_back(landmark); } return landmarks; } int RawDataProvider::GetFrameCount() { return (int)std::min(m_depthImageFiles.size(), m_imageFiles.size()); } int RawDataProvider::GetFrequency() { return m_frequency; } void RawDataProvider::CheckInitialized() { if (m_initialized == false) { throw std::logic_error("RawDataProvider::CheckInitialized: call SetFolder before requesting frames"); } } void RawDataProvider::SetFolder(const std::string &path) { m_pointCloudFolder = path; if (fs::exists(m_pointCloudFolder) == false) { throw std::invalid_argument("DataProvider::SetFolder: pointCloudFolder does not exist"); } std::string videoPath = (fs::path(m_pointCloudFolder) / "video.mpg").string(); if (fs::exists(videoPath) == false) { /* throw std::invalid_argument("DataProvider::SetFolder: video does not exist"); */ } m_videoCapture = std::shared_ptr<cv::VideoCapture>(new cv::VideoCapture); m_videoCapture->open(videoPath); EnumerateFiles(); m_initialized = true; } void RawDataProvider::EnumerateFiles() { fs::path rootFolder(m_pointCloudFolder); if (fs::exists(rootFolder)) { if (fs::is_directory(rootFolder)) { auto it = fs::directory_iterator(rootFolder); for (; it != fs::directory_iterator(); it++) { auto fileName = *it; if (fs::is_regular(fileName)) { auto path = fileName.path(); fs::path extension = path.extension(); auto filename = path.filename().string(); if (filename == "shimmeroutput.csv") { continue; } std::string p = path.string(); if (extension == ".png") { m_imageFiles.push_back(p); } else if (extension == ".raw") { m_depthImageFiles.push_back(p); } else if (extension == ".csv") { m_pointsFiles.push_back(p); } } } } } std::sort(m_imageFiles.begin(), m_imageFiles.end()); std::sort(m_depthImageFiles.begin(), m_depthImageFiles.end()); std::sort(m_pointsFiles.begin(), m_pointsFiles.end()); } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Provider/RawDataProvider.h ================================================ #ifndef
RAWDATAPROVIDER_H #define RAWDATAPROVIDER_H #include #include #include #include #include "Provider/IDataProvider.h" #include "Common.h" namespace hpe { class RawDataProvider : public IDataProvider { public: typedef Common::Landmarks Landmarks; RawDataProvider(); void SetFolder(const std::string &path); cv::Mat GetDepth(int frameNumber); cv::Mat GetBgr(int frameNumber); pcl::PointCloud::Ptr GetCloud(int frameNumber); Landmarks GetPoints(int frameNumber); int GetFrameCount(); int GetFrequency(); private: void CheckInitialized(); void EnumerateFiles(); std::vector m_imageFiles; std::vector m_depthImageFiles; std::vector m_pointsFiles; std::string m_pointCloudFolder; int m_numberOfFrames; int m_frequency; bool m_initialized; std::shared_ptr m_videoCapture; }; } #endif // DATAPROVIDER_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Settings/HPESettings.cpp ================================================ #include "HPESettings.h" #include #include #include #include namespace hpe { HPESettings::HPESettings(void) { Load("settings.ini"); } HPESettings::HPESettings(std::string file) { Load(file); } HPESettings::HPESettings(bool load, std::string file) { if (load) { Load(file); } } HPESettings::~HPESettings(void) { } void HPESettings::Load(std::string file) { if (boost::filesystem::exists(file)) { boost::property_tree::ini_parser::read_ini(file, m_propertyTree); } else { throw HPEException("Settings: file not found"); } } std::string HPESettings::GetString(std::string key) { auto child = m_propertyTree.get_child_optional(key); if (!child) { std::string message = (boost::format("Settings: key %1% not found") % key).str(); throw HPEException(message); } return m_propertyTree.get(key); } std::vector HPESettings::ReadList(std::string listFile) { if (boost::filesystem::exists(listFile) == false) { throw HPEException("std::vector HPESettings::ReadList(std::string listFile) : listFile doesn't exist"); } std::vector result; 
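HPESettings::GetVector (further down in HPESettings.h) reads a comma-separated ini value such as 1,2,3,4,5,6, splits it on commas, and lexically casts each token, skipping tokens that fail to convert. A stdlib-only sketch of the same parsing (function name is illustrative; note operator>> is slightly more tolerant of trailing characters than boost::lexical_cast, which the original uses):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hedged stand-in for HPESettings::GetVector<T>: split on ',' and convert
// each token to T, silently skipping unparseable tokens (the original logs
// boost::bad_lexical_cast and continues).
template <typename T>
std::vector<T> ParseCommaSeparated(const std::string &value)
{
    std::vector<T> result;
    std::stringstream stream(value);
    std::string token;
    while (std::getline(stream, token, ',')) {
        std::stringstream converter(token);
        T parsed;
        if (converter >> parsed) {
            result.push_back(parsed);
        }
    }
    return result;
}
```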
std::ifstream stream(listFile); std::string line; while (std::getline(stream, line)) { result.push_back(line); } return result; } std::vector HPESettings::ReadStringList(std::string key) { return ReadList(GetString(key)); } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Settings/HPESettings.h ================================================ #pragma once #ifndef HPESETTINGS_H #define HPESETTINGS_H #include #include #include #include #include namespace hpe { /** \class HPESettings \brief This class is used to read *.ini files. */ class HPESettings { public: HPESettings(void); HPESettings(std::string file); HPESettings(bool load, std::string file = "settings.ini"); ~HPESettings(void); void Load(std::string file); std::string GetString(std::string key); /** \fn template T HPESettings::GetValue(std::string key) \brief Return a value from ini file and convert it to T. \author Sergey \date 8/11/2015 \tparam T type of the returned value. \param key The key in the ini file. \return The value. */ template T GetValue(std::string key) { std::string value = GetString(key); return boost::lexical_cast(value); } /** \fn template std::vector HPESettings::GetVector(std::string key) \brief Similar to GetValue(...), but to return a vector of T. In an ini an int vector should be stored as 1,2,3,4,5,6 (comma separated) \tparam T All vector values will be cast to T. \param key The key. \return The vector of T. 
*/ template std::vector GetVector(std::string key) { std::string value = GetString(key); std::vector strs; boost::split(strs, value, boost::is_any_of(",")); std::vector result; for (int i = 0; i < strs.size(); i++) { try { T v = boost::lexical_cast(strs[i]); result.push_back(v); } catch (boost::bad_lexical_cast &ex) { std::cout << ex.what() << std::endl; } } return result; } static std::vector ReadList(std::string listFile); std::vector ReadStringList(std::string key); private: boost::property_tree::ptree m_propertyTree; }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/Tracker/ICPTemplateTracker.h ================================================ #ifndef ICPTEMPLATETRACKER_H #define ICPTEMPLATETRACKER_H #include "Common.h" #include "Align/CloudMapper.h" #include #include #include #include #include #include #include #include #include namespace hpe { //FIXME Now works only with pcl::PointXYZRGBA because CloudMapper is not a template template class ICPTemplateTracker { public: typedef pcl::PointCloud TCloud; ICPTemplateTracker(typename TCloud::Ptr &templateCloud) : m_templateCloud(templateCloud), m_canPredict(false), m_transformationFilter(nullptr) { m_mapper = std::shared_ptr(new CloudMapper()); m_mapper->SetUseNormalShooting(false); m_mapper->SetUsePointToPlaneMetric(true); m_icpParams.push_back(CloudMapper::ICPParams(0.005, 0.005, 10e-6, 2000, 5)); m_mapper->SetDynamicICPParams(m_icpParams); m_mapper->SetUsePointToPlaneMetric(true); m_followingTemplate = typename TCloud::Ptr(new TCloud); } void RoughFirstTimeDetection(typename TCloud::Ptr &frame, Eigen::Matrix4f &transformation) { pcl::transformPointCloud(*m_templateCloud, *m_followingTemplate, transformation); m_canPredict = true; } void SetTemplate(typename TCloud::Ptr &templateCloud) { pcl::copyPointCloud(*templateCloud, *m_templateCloud); m_canPredict = false; } void GetRelevantPoints(typename TCloud::Ptr &newFrame, std::vector &points, float distance) { 
pcl::search::KdTree search; search.setInputCloud(newFrame); for (int i = 0; i < m_followingTemplate->size() ; i += 1) { static const int neighbours = 1; std::vector pointIdx(neighbours); std::vector pointDistance(neighbours); if (search.nearestKSearch(m_followingTemplate->points[i], neighbours, pointIdx, pointDistance)) { if (pointDistance[0] < distance) { points.push_back(i); } } } } typename TCloud::Ptr Update(typename TCloud::Ptr &newFrame) { m_followingTransform = m_mapper->GetTransformForTwoCloudsDynamically(m_followingTemplate, newFrame, m_icpParams); if (m_transformationFilter != nullptr) { m_followingTransform = m_transformationFilter->Filter(m_followingTransform); } typename TCloud::Ptr updatedCloud(new TCloud); pcl::transformPointCloud(*m_followingTemplate, *updatedCloud, m_followingTransform); pcl::copyPointCloud(*updatedCloud, *m_followingTemplate); std::vector pointIndexes; GetRelevantPoints(newFrame, pointIndexes, 5e-6); if (m_pointHistory.size() >= HistorySize) { m_pointHistory.pop_front(); } m_pointHistory.push_back(pointIndexes); std::vector weights = WeightsFromPointHistory(m_pointHistory, m_followingTemplate->size()); m_mapper->SetWeights(weights); return updatedCloud; } Eigen::Matrix4f GetCurrentTransform() { return m_followingTransform; } void SetTransformationFilter(std::shared_ptr> &dataFilter) { m_transformationFilter = dataFilter; } bool CanPredict() { return m_canPredict; } void ResetTracking() { m_canPredict = false; } private: std::vector WeightsUniform(int totalPoints, double weight = 1) { std::vector weights(totalPoints, weight); return weights; } std::vector WeightsFromPointHistory(PointHistory &history, int totalPoints) { std::vector weights(totalPoints); for (auto indexes = history.begin(); indexes != history.end(); indexes++) { for (auto index = indexes->begin(); index != indexes->end(); index++) { weights[*index] += 1; } } for (int i = 0; i < weights.size();
i += 1) { weights[i] = 0.3 + 0.4 * weights[i]; } return weights; } std::vector WeightsGaussian(typename TCloud::Ptr &cloud) { TPoint &noseTip = cloud->points[0]; std::vector weights(cloud->points.size()); const double sigma = 0.1; for (int i = 0; i < cloud->points.size(); i += 1) { TPoint &pt = cloud->points[i]; double dist = (pt.x - noseTip.x) * (pt.x - noseTip.x) + (pt.y - noseTip.y) * (pt.y - noseTip.y) + (pt.z - noseTip.z) * (pt.z - noseTip.z); weights[i] = 0.3 + std::exp(- dist / 2 / sigma / sigma); } return weights; } std::vector WeightsGaussianAndHistory(PointHistory &history, typename TCloud::Ptr &cloud) { std::vector weightsFromHistory = WeightsFromPointHistory(history, cloud->size()); std::vector weightsFromGaussian = WeightsGaussian(cloud); std::vector weights(cloud->size()); for (int i = 0; i < cloud->size(); i += 1) { weights[i] = weightsFromGaussian[i] + weightsFromHistory[i]; } return weights; } bool m_canPredict; static const int HistorySize = 5; typename TCloud::Ptr m_templateCloud; typename TCloud::Ptr m_followingTemplate; std::shared_ptr m_mapper; PointHistory m_pointHistory; Eigen::Matrix4f m_followingTransform; CloudMapper::DynamicICPParams m_icpParams; std::shared_ptr> m_transformationFilter; }; } #endif // ICPTEMPLATETRACKER_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/UI/PointPicker.h ================================================ #pragma once #ifndef POINTPICKER_H #define POINTPICKER_H #include #include #include #include #include #include #include #include #include #include namespace hpe { /** \class PointPicker \brief A component that shows a window asking to manually pick a set of points. To pick a point press SHIFT key and left click on the point, the point index will be displayed. To delete the last point press DELETE. When done press Space \author Sergey \date 8/11/2015 \tparam PointType Type of the point type. 
*/ template class PointPicker { public: typedef typename Common::Landmark PickedPoint; typedef typename Common::Landmarks PickedPoints; PointPicker(bool showColor = true, bool allowLessKeypoints = false) : m_cloudIsSet(false), m_done(false), m_sphereNumber(0), m_warningId("warning text"), m_pointsListId("m_pointsListId"), m_selectedPointsCloudId("m_selectedPointsCloudId"), m_addedPointList(false), m_addedCloud(false), m_showColor(showColor), m_allowLessKeypoints(allowLessKeypoints) { } PickedPoints Pick(std::string message, int numberOfPoints, std::string extendedMessage="") { if (m_cloudIsSet == false) { throw HPEException("void Pick(std::string &message, int numberOfPoints): Cloud was not set"); } m_numberOfPointsToPick = numberOfPoints; pcl::visualization::PCLVisualizer viewer(message); m_visualizer = &viewer; viewer.registerPointPickingCallback(&PointPicker::PointPickingCallback, this); viewer.registerKeyboardCallback(&PointPicker::KeyboardCallback, this); viewer.addPointCloud(m_cloud, "cloud"); if (m_showColor == false) { viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 1, "cloud"); } viewer.addText("Press:\n SHIFT + Left Click to select\n DELETE to clear\n BACKSPACE to delete last point\n SPACE to finish", 10, 50, 11, 1, 1, 1, "instructions"); viewer.addText(extendedMessage, 200, 50, 11, 1, 1, 1, "extendedMessage"); while (viewer.wasStopped() == false && m_done == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } std::vector cameras; viewer.getCameras(cameras); if (cameras.size() > 0) { m_camera = std::shared_ptr(new pcl::visualization::Camera); *m_camera = cameras[0]; } m_visualizer->close(); return m_pickedPoints; } void SetCloud(const typename pcl::PointCloud::Ptr &cloud) { m_cloud = cloud; m_search.setInputCloud(cloud); m_cloudIsSet = true; m_distanceBetweenPoints = std::abs(cloud->at(0).y - cloud->at(1).y); ClearPicker(); } std::shared_ptr GetCamera() { return 
m_camera; } private: std::shared_ptr m_camera; bool m_showColor; bool m_cloudIsSet; bool m_done; bool m_addedCloud; bool m_addedPointList; bool m_allowLessKeypoints; int m_numberOfPointsToPick; float m_distanceBetweenPoints; std::string m_warningId; std::string m_pointsListId; std::string m_selectedPointsCloudId; pcl::visualization::PCLVisualizer *m_visualizer; typename pcl::PointCloud::Ptr m_cloud; typename pcl::search::KdTree m_search; PickedPoints m_pickedPoints; int m_sphereNumber; void ShowSelectedPoints() { if (m_addedCloud) { m_visualizer->removePointCloud(m_selectedPointsCloudId); m_addedCloud = false; } if (m_pickedPoints.empty()) { return; } pcl::PointCloud::Ptr cloud(new pcl::PointCloud); for (int i = 0; i < m_pickedPoints.size(); i++) { auto point = m_pickedPoints[i].point; pcl::PointXYZRGB p; p.x = point.x; p.z = point.z; p.y = point.y; p.r = 255; p.g = 255; p.b = 255; cloud->points.push_back(p); } m_visualizer->addPointCloud(cloud, m_selectedPointsCloudId); m_visualizer->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 4, m_selectedPointsCloudId); m_addedCloud = true; } void RemoveLastPoint() { if (m_pickedPoints.empty()) { return; } m_pickedPoints.pop_back(); } void PrintPointsList() { if (m_addedPointList) { m_visualizer->removeShape(m_pointsListId); m_addedPointList = false; } if (m_pickedPoints.empty()) { return; } std::stringstream stream; for (int i = 0; i < m_pickedPoints.size(); i++) { auto p = m_pickedPoints[i].point; stream << boost::format("%1% - %5% - [%2% %3% %4%]") % (i + 1) % p.x % p.y % p.z % m_pickedPoints[i].index << std::endl; } m_addedPointList = true; auto text = stream.str(); m_visualizer->addText(text, 10, 150, 10, 1, 1, 1, m_pointsListId); } void ClearPicker() { m_pickedPoints.clear(); m_done = false; } void ShowWarning() { std::string warning = (boost::format("You need to pick exactly %1% points") % m_numberOfPointsToPick).str(); m_visualizer->addText(warning, 10, 100, 10, 1, 0, 0, m_warningId); 
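PointPicker::PointPickingCallback (below) maps the coordinates of a clicked point back to the index of the nearest cloud point, which is what gets stored as the landmark index; the real code uses pcl::search::KdTree::nearestKSearch. A brute-force stdlib-only sketch of that index lookup (the Point3 struct and function name are illustrative stand-ins, not framework types):

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Point3 { float x, y, z; }; // minimal stand-in for a PCL point

// Hedged brute-force equivalent of the kd-tree lookup in
// PointPicker::PointPickingCallback: return the index of the cloud point
// with minimum squared distance to the picked coordinates.
std::size_t ClosestPointIndex(const std::vector<Point3> &cloud, Point3 picked)
{
    std::size_t best = 0;
    float bestDist = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < cloud.size(); ++i) {
        float dx = cloud[i].x - picked.x;
        float dy = cloud[i].y - picked.y;
        float dz = cloud[i].z - picked.z;
        float dist = dx * dx + dy * dy + dz * dz;
        if (dist < bestDist) {
            bestDist = dist;
            best = i;
        }
    }
    return best;
}
```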
} bool CheckNumberOfPoints() { if (m_pickedPoints.size() >= m_numberOfPointsToPick) { ShowWarning(); return false; } return true; } bool CheckEnoughPointsSelected() { if (m_allowLessKeypoints) { return true; } if (m_pickedPoints.size() == m_numberOfPointsToPick) { return true; } ShowWarning(); return false; } void ClearWarning() { m_visualizer->removeShape(m_warningId); } static void PointPickingCallback(const pcl::visualization::PointPickingEvent &event, void *sender) { PointPicker *_this = (PointPicker *)sender; if (_this->CheckNumberOfPoints() == false) { return; } float x, y, z; event.getPoint(x, y, z); PointType p; p.x = x; p.y = y; p.z = z; std::vector foundIndices; std::vector distances; _this->m_search.nearestKSearch(p, 1, foundIndices, distances); PickedPoint pickedPoint; pickedPoint.index = foundIndices.at(0); pickedPoint.point = _this->m_cloud->at(pickedPoint.index); _this->m_pickedPoints.push_back(pickedPoint); _this->ShowSelectedPoints(); _this->PrintPointsList(); } static void KeyboardCallback(const pcl::visualization::KeyboardEvent &event, void *sender) { if (event.keyDown() == false) { return; } PointPicker *_this = (PointPicker *)sender; _this->ClearWarning(); auto keysym = event.getKeySym(); if (keysym == "space") { if (_this->CheckEnoughPointsSelected()) { _this->m_done = true; } } if (keysym == "Delete") { // _this->ClearPicker(); // _this->PrintPointsList(); // _this->ShowSelectedPoints(); } if (keysym == "BackSpace") { _this->RemoveLastPoint(); _this->PrintPointsList(); _this->ShowSelectedPoints(); } } }; } #endif // POINTPICKER_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/UI/ShowCloud.h ================================================ #ifndef SHOWCLOUD_H #define SHOWCLOUD_H #include #include #include #include #include #include #include #include #include //#include #include namespace hpe { template typename pcl::PointCloud::Ptr MoveCloudTo(typename pcl::PointCloud::Ptr &cloud, float x, float 
y, float z) { Eigen::Vector4d centroid; pcl::compute3DCentroid(*cloud, centroid); typename pcl::PointCloud::Ptr result(new pcl::PointCloud); Eigen::Matrix4f translation = Eigen::Matrix4f::Identity(); translation(0, 3) = (x - centroid(0)); translation(1, 3) = (y - centroid(1)); translation(2, 3) = (z - centroid(2)); pcl::transformPointCloud(*cloud, *result, translation); return result; } template void ShowTwoClouds(typename pcl::PointCloud::Ptr &cloud1, typename pcl::PointCloud::Ptr &cloud2, std::string caption = "Two Clouds", int waitMillisecods = 0) { pcl::visualization::PCLVisualizer viewer(caption); viewer.addPointCloud(cloud1, "1"); viewer.addPointCloud(cloud2, "2"); if (waitMillisecods == 0) { while (viewer.wasStopped() == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } } else { boost::this_thread::sleep(boost::posix_time::milliseconds(waitMillisecods)); } } template std::shared_ptr ShowCloud(typename pcl::PointCloud::Ptr &cloud, std::string caption = "Cloud", std::shared_ptr camera = nullptr, double r = 0, double g = 0, double b = 0) { pcl::visualization::PCLVisualizer viewer(caption); auto t = MoveCloudTo(cloud, 0, 0, 0); viewer.addPointCloud(t, "1"); if (camera != nullptr) { viewer.setCameraParameters(*camera); } viewer.setBackgroundColor(r, g, b); viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "1"); while (viewer.wasStopped() == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } std::vector cameras; viewer.getCameras(cameras); std::shared_ptr result(new pcl::visualization::Camera); *result = cameras[0]; return result; } template void ShowAllClouds(std::vector::Ptr> &clouds, std::string caption = "Clouds") { pcl::visualization::PCLVisualizer viewer(caption); auto it = clouds.begin(); int i = 0; for (; it != clouds.end(); ++it) { std::string cloudName = (boost::format("%1%") % i++).str(); viewer.addPointCloud(*it,
cloudName); } while (viewer.wasStopped() == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } } template void ShowTwoCloudsInDifferentColors(typename pcl::PointCloud::Ptr &cloud1, typename pcl::PointCloud::Ptr &cloud2, pcl::CorrespondencesPtr correspondences, std::string caption = "Two clouds different color", int waitMillisecods = 0) { pcl::visualization::PCLVisualizer viewer(caption); viewer.addCoordinateSystem(); viewer.addPointCloud(cloud1, "1"); viewer.addPointCloud(cloud2, "2"); if (correspondences != nullptr) { viewer.addCorrespondences(cloud1, cloud2, *correspondences); } viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, "1"); viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, "2"); if (waitMillisecods == 0) { while (viewer.wasStopped() == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } } else { viewer.spinOnce(waitMillisecods); } } template void ShowCloudInColor(typename pcl::PointCloud::Ptr &cloud, std::shared_ptr camera = nullptr, double r = 0, double g = 0, double b = 1, double rb = 0, double gb = 0, double bb = 0, double pointSize = 3, std::string caption = "Cloud in color") { pcl::visualization::PCLVisualizer viewer(caption); viewer.addPointCloud(cloud, "1"); if (camera != nullptr) { viewer.setCameraParameters(*camera); } viewer.setBackgroundColor(rb, gb, bb); viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, r, g, b, "1"); viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, pointSize, "1"); while (viewer.wasStopped() == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } } template void ShowTwoCloudsInDifferentColors(typename pcl::PointCloud::Ptr &cloud1, typename pcl::PointCloud::Ptr &cloud2, std::string caption = "Two clouds different color", int 
waitMillisecods = 0) { ShowTwoCloudsInDifferentColors(cloud1, cloud2, pcl::CorrespondencesPtr(), caption, waitMillisecods); } template void ShowCloudWithLandmarks(typename pcl::PointCloud::Ptr &cloud, typename Common::Landmarks &landmarks, std::string caption = "Landmarks", bool showLandmarksNumbers = false) { typename pcl::PointCloud::Ptr markerCloud(new pcl::PointCloud); for (auto it = landmarks.begin(); it != landmarks.end(); ++it) { markerCloud->push_back((*it).point); } pcl::visualization::PCLVisualizer viewer(caption); viewer.addPointCloud(cloud, "Main Cloud"); viewer.addPointCloud(markerCloud, "Marker Cloud"); viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, "Marker Cloud"); viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 4, "Marker Cloud"); if (showLandmarksNumbers) { for (int i = 0; i < markerCloud->size(); i++) { std::string text = (boost::format("%1%") % i).str(); viewer.addText3D(text, markerCloud->at(i), 2, 1, 1, 1, text); } } while (viewer.wasStopped() == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } } } #endif // SHOWCLOUD_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/UI/ShowMesh.h ================================================ #pragma once #ifndef SHOWMESH_H #define SHOWMESH_H #include #include namespace hpe { void ShowMesh(pcl::PolygonMesh::Ptr &mesh, std::string caption = "Mesh") { pcl::visualization::PCLVisualizer viewer(caption); viewer.addPolygonMesh(*mesh, "mesh"); //viewer.resetCamera(); //viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, "mesh"); while (viewer.wasStopped() == false) { viewer.spinOnce(100); boost::this_thread::sleep(boost::posix_time::microseconds(100000)); } } } #endif // SHOWMESH_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/UI/VisualizerCommon.h 
================================================ #ifndef VISUALIZERCOMMON_H #define VISUALIZERCOMMON_H #include #include #include #include #include #include #include #include #include #include namespace hpe { void SaveVisualizationScreenshot(std::string filename, vtkSmartPointer &renderWindow) { renderWindow->Render(); vtkSmartPointer windowToImageFilter = vtkSmartPointer::New(); windowToImageFilter->SetInput(renderWindow); //windowToImageFilter->SetMagnification(1); //set the resolution of the output image (3 times the current resolution of vtk render window) windowToImageFilter->SetInputBufferTypeToRGBA(); //also record the alpha (transparency) channel windowToImageFilter->Update(); vtkSmartPointer writer = vtkSmartPointer::New(); writer->SetFileName(filename.c_str()); writer->SetInputConnection(windowToImageFilter->GetOutputPort()); writer->Write(); } } #endif // VISUALIZERCOMMON_H ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Grabber/KinectSDKGrabber.cpp ================================================ #include "KinectSDKGrabber.h" #include #include #include namespace hpe { KinectSDKGrabber::KinectSDKGrabber(void) { m_colorCoordinates = new LONG[FrameBufferSize * 2]; } KinectSDKGrabber::~KinectSDKGrabber(void) { delete [] m_colorCoordinates; } IDataStorage::Ptr KinectSDKGrabber::CreateDataStorage() { MapDataStorage::Ptr storage(new MapDataStorage); IDataObject::Ptr mapper(new KinectCoordinatesMapper(m_colorCoordinates)); storage->Set("CoordinatesMapper", mapper); return storage; } void KinectSDKGrabber::PreRun() { HRESULT hr = CreateFirstConnected(); if (FAILED(hr)) { throw HPEException("KinectSDKGrabber::PreRun() - CreateFirstConnected() failed"); } } bool KinectSDKGrabber::GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame) { WaitForSingleObject(m_hNextDepthFrameEvent, INFINITE); WaitForSingleObject(m_hNextColorFrameEvent, INFINITE); colorFrame = cv::Mat::zeros(FrameHeight, FrameWidth, CV_8UC4); 
depthFrame = cv::Mat::zeros(FrameHeight, FrameWidth, CV_32FC1); HRESULT hr = ProcessDepth((float *)depthFrame.data); if (FAILED(hr)) { return false; } hr = ProcessColor(colorFrame); if (FAILED(hr)) { return false; } return true; } HRESULT KinectSDKGrabber::CreateFirstConnected() { INuiSensor *pNuiSensor; HRESULT hr; int iSensorCount = 0; hr = NuiGetSensorCount(&iSensorCount); if (FAILED(hr)) { return hr; } std::cout << "Count" << std::endl; // Look at each Kinect sensor for (int i = 0; i < iSensorCount; ++i) { // Create the sensor so we can check status, if we can't create it, move on to the next hr = NuiCreateSensorByIndex(i, &pNuiSensor); if (FAILED(hr)) { continue; } std::cout << "Created" << std::endl; // Get the status of the sensor, and if connected, then we can initialize it hr = pNuiSensor->NuiStatus(); if (S_OK == hr) { m_pNuiSensor = pNuiSensor; break; } // This sensor wasn't OK, so release it since we're not using it pNuiSensor->Release(); } if (NULL != m_pNuiSensor) { // Initialize the Kinect and specify that we'll be using depth hr = m_pNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR); if (SUCCEEDED(hr)) { // Create an event that will be signaled when depth data is available m_hNextDepthFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL); // Open a depth image stream to receive depth frames hr = m_pNuiSensor->NuiImageStreamOpen( NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480, 0, 2, m_hNextDepthFrameEvent, &m_pDepthStreamHandle); m_pNuiSensor->NuiImageStreamSetImageFrameFlags(m_pDepthStreamHandle, NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE); std::cout << "Depth" << std::endl; } if (SUCCEEDED(hr)) { m_hNextColorFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL); hr = m_pNuiSensor->NuiImageStreamOpen( NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, 0, 2, m_hNextColorFrameEvent, &m_pColorStreamHandle); std::cout << "Color" << std::endl; } } if (NULL == m_pNuiSensor || FAILED(hr)) { return E_FAIL; } return hr; } 
HRESULT KinectSDKGrabber::ProcessDepth(float *buffer)
{
    HRESULT hr = S_OK;
    NUI_IMAGE_FRAME imageFrame;

    // Attempt to get the depth frame
    hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);
    if (FAILED(hr))
    {
        // E_NUI_FRAME_NO_DATA just means no new frame is ready yet; bail out
        // without logging, but never touch the (uninitialized) frame below.
        if (hr != E_NUI_FRAME_NO_DATA)
        {
            std::cout << hr << std::endl;
        }
        return hr;
    }

    BOOL nearMode;
    INuiFrameTexture *pTexture = imageFrame.pFrameTexture;
    NUI_LOCKED_RECT LockedRect;

    // Lock the frame data so the Kinect knows not to modify it while we're reading it
    pTexture->LockRect(0, &LockedRect, NULL, 0);
    m_pNuiSensor->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
        NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480,
        FrameBufferSize, (USHORT *)LockedRect.pBits,
        FrameBufferSize * 2, m_colorCoordinates);
    pTexture->UnlockRect(0);

    // Get the depth image pixel texture
    hr = m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, &imageFrame, &nearMode, &pTexture);
    if (FAILED(hr))
    {
        goto ReleaseFrame;
    }

    // Lock the frame data so the Kinect knows not to modify it while we're reading it
    pTexture->LockRect(0, &LockedRect, NULL, 0);

    // Make sure we've received valid data
    if (LockedRect.Pitch != 0)
    {
        const NUI_DEPTH_IMAGE_PIXEL *pBufferRun = reinterpret_cast<const NUI_DEPTH_IMAGE_PIXEL *>(LockedRect.pBits);

        // end pixel is start + width*height - 1
        const NUI_DEPTH_IMAGE_PIXEL *pBufferEnd = pBufferRun + FrameBufferSize;

        int idx = 0;
        while (pBufferRun < pBufferEnd)
        {
            float depth = pBufferRun->depth;
            if (depth != 0)
            {
                buffer[idx] = depth / 1000.f;
            }
            else
            {
                buffer[idx] = std::numeric_limits<float>::quiet_NaN();
            }
            ++pBufferRun;
            ++idx;
        }

        hr = S_OK;
    }
    else
    {
        hr = E_FAIL;
    }

    // We're done with the texture so unlock it
    pTexture->UnlockRect(0);
    pTexture->Release();

ReleaseFrame:
    // Release the frame
    m_pNuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &imageFrame);

    return hr;
}

HRESULT KinectSDKGrabber::ProcessColor(cv::Mat &mat)
{
    NUI_IMAGE_FRAME imageFrame;
    HRESULT hr =
m_pNuiSensor->NuiImageStreamGetNextFrame(m_pColorStreamHandle, 0, &imageFrame); if (FAILED(hr)) { return hr; } NUI_LOCKED_RECT LockedRect; hr = imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0); if (FAILED(hr)) { goto ReleaseFrame; } if (LockedRect.Pitch != 0) { if (0) { int idx = 0; while (idx < FrameBufferSize) { *(mat.data + 3 * idx + 2) = *(LockedRect.pBits + 4 * idx + 2); *(mat.data + 3 * idx + 1) = *(LockedRect.pBits + 4 * idx + 1); *(mat.data + 3 * idx + 0) = *(LockedRect.pBits + 4 * idx + 0); ++idx; } } else { memcpy(mat.data, LockedRect.pBits, LockedRect.size); } hr = S_OK; } else { hr = E_FAIL; } imageFrame.pFrameTexture->UnlockRect(0); ReleaseFrame: hr = m_pNuiSensor->NuiImageStreamReleaseFrame(m_pColorStreamHandle, &imageFrame); return hr; } INuiCoordinateMapper *KinectSDKGrabber::GetCoordinateMapper() { if (m_pNuiSensor) { INuiCoordinateMapper *mapper; HRESULT hr = m_pNuiSensor->NuiGetCoordinateMapper(&mapper); if (SUCCEEDED(hr)) { return mapper; } else { return NULL; } } return NULL; } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Grabber/KinectSDKGrabber.h ================================================ #pragma once #ifndef KINECTSDKGRABBER_H #define KINECTSDKGRABBER_H #include #include #include #include namespace hpe { class KinectCoordinatesMapper : public IDataObject { public: KinectCoordinatesMapper() : correspondencies(nullptr) {} KinectCoordinatesMapper(LONG *c) : correspondencies(c) {} LONG *correspondencies; }; class KinectSDKGrabber : public GrabberBase { public: KinectSDKGrabber(void); ~KinectSDKGrabber(void); INuiCoordinateMapper *GetCoordinateMapper(); protected: bool GetNextFrame(cv::Mat &colorFrame, cv::Mat &depthFrame) override; IDataStorage::Ptr CreateDataStorage() override; void PreRun() override; private: static const int FrameWidth = 640; static const int FrameHeight = 480; static const int FrameBufferSize = FrameWidth *FrameHeight; HRESULT 
CreateFirstConnected(); HRESULT ProcessDepth(float *buffer); HRESULT ProcessColor(cv::Mat &mat); INuiSensor *m_pNuiSensor; HANDLE m_pDepthStreamHandle; HANDLE m_hNextDepthFrameEvent; HANDLE m_pColorStreamHandle; HANDLE m_hNextColorFrameEvent; LONG *m_colorCoordinates; }; } #endif ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Processor/KinectSDKConverterProcessor.cpp ================================================ #include "KinectSDKConverterProcessor.h" #include #include #include #include #include #include #include #include namespace hpe { typedef union { struct { unsigned char Blue; unsigned char Green; unsigned char Red; unsigned char Alpha; }; float float_value; uint32_t long_value; } RGBValue; KinectSDKConverterProcessor::KinectSDKConverterProcessor() : m_cloudKey("Cloud") { } KinectSDKConverterProcessor::KinectSDKConverterProcessor(std::string key) : m_cloudKey(key) { } pcl::PointCloud::Ptr KinectSDKConverterProcessor::DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr) { cv::Size depthImageSize = depth.size(); float centerX, centerY; float scaleFactorX, scaleFactorY; ComputeCameraIntrinsics(depthImageSize, centerX, centerY, scaleFactorX, scaleFactorY); pcl::PointCloud::Ptr result(new pcl::PointCloud); result->width = depthImageSize.width; result->height = depthImageSize.height; result->is_dense = false; result->points.resize(depthImageSize.width * depthImageSize.height); for (int y = 0; y < depthImageSize.height; ++y) { for (int x = 0; x < depthImageSize.width; ++x) { pcl::PointXYZRGBA &pt = result->at(x, y); float depthValue = depth.at(y, x); if (depthValue == -1000.f || depthValue == 0.0f || boost::math::isnan(depthValue)) { pt.x = pt.y = pt.z = std::numeric_limits::quiet_NaN(); } else { //depthValue /= 1000; pt.x = (static_cast(x) - centerX) * scaleFactorX * depthValue; pt.y = (centerY - static_cast(y)) * scaleFactorY * depthValue; pt.z = depthValue; int depthIndex = x + y * 640; LONG actX = 
m_correspondencies[depthIndex * 2]; LONG actY = m_correspondencies[depthIndex * 2 + 1]; if (actX >= bgr.cols || actX < 0 || actY < 0 || actY >= bgr.rows) { pt.rgba = 0; } else { cv::Vec4b colorValue = bgr.at(actY, actX); RGBValue color; color.Red = colorValue[2]; color.Green = colorValue[1]; color.Blue = colorValue[0]; pt.rgba = color.long_value; } } } } return result; } void KinectSDKConverterProcessor::ComputeCameraIntrinsics(cv::Size &imageSize, float ¢erX, float ¢erY, float &scaleFactorX, float &scaleFactorY) { int dims[] = {imageSize.width, imageSize.height}; // The 525 factor default is only true for VGA. If not, we should scale scaleFactorX = scaleFactorY = 1 / 525.f * 640.f / dims[0]; centerX = (static_cast(dims[0]) - 1.f) / 2.f; centerY = (static_cast(dims[1]) - 1.f) / 2.f; } void KinectSDKConverterProcessor::Process(IDataStorage::Ptr dataStorage) { std::shared_ptr mapperObject = std::dynamic_pointer_cast(dataStorage->Get("CoordinatesMapper")); if (mapperObject.get() == nullptr) { throw HPEException("KinectSDKConverterProcessor::Process - mapperObject.get() == nullptr"); } m_correspondencies = mapperObject->correspondencies; IDataObject::Ptr object = dataStorage->Get("RawFrames"); RawFrames::Ptr rawFrames = std::dynamic_pointer_cast(object); if (rawFrames.get() == nullptr) { throw HPEException("KinectSDKConverterProcessor::Process - rawFrames.get() == nullptr"); } CloudXYZRGBA::Ptr cloudObject(new CloudXYZRGBA); cloudObject->cloud = DepthRGBToPointCloud(rawFrames->depthFrame, rawFrames->colorFrame); cloudObject->cloud->width = cloudObject->cloud->width * cloudObject->cloud->height; cloudObject->cloud->height = 1; dataStorage->Set(m_cloudKey, cloudObject); } } ================================================ FILE: FaceCept3D/HeadPoseEstimationFramework/WindowsOnly/Processor/KinectSDKConverterProcessor.h ================================================ #ifndef KINECTSDKCONVERTERPROCESSOR_H #define KINECTSDKCONVERTERPROCESSOR_H #include #include #include 
#include #include namespace hpe { class KinectSDKConverterProcessor : public IProcessor { public: KinectSDKConverterProcessor(); KinectSDKConverterProcessor(std::string key); pcl::PointCloud::Ptr DepthRGBToPointCloud(cv::Mat &depth, cv::Mat &bgr); void Process(IDataStorage::Ptr dataStorage) override; private: void ComputeCameraIntrinsics(cv::Size &imageSize, float ¢erX, float ¢erY, float &scaleFactorX, float &scaleFactorY); LONG *m_correspondencies; std::string m_cloudKey; }; } #endif // KINECTDATACONVERTER_H ================================================ FILE: FaceCept3D/TemplateCreator/CMakeLists.txt ================================================ cmake_minimum_required(VERSION 2.8) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11") set(HEAD_POSE_ESTIMATION "HeadPoseEstimationFramework") cmake_policy(SET CMP0015 OLD) project(TemplateCreator) FILE (GLOB SRC_LIST *.cpp *.h) find_package(PCL 1.7 REQUIRED) find_package(OpenCV REQUIRED) include_directories(${PCL_INCLUDE_DIRS} ../${HEAD_POSE_ESTIMATION}) link_directories(${PCL_LIBRARY_DIRS} ${CMAKE_BINARY_DIR} ../${HEAD_POSE_ESTIMATION}-build) add_definitions(${PCL_DEFINITIONS}) add_executable(${PROJECT_NAME} ${SRC_LIST}) target_link_libraries(${PROJECT_NAME} ${PCL_LIBRARIES} ${OpenCV_LIBS} ${HEAD_POSE_ESTIMATION}) ================================================ FILE: FaceCept3D/TemplateCreator/TemplateCreator.cpp ================================================ // TemplateCreator.cpp : Defines the entry point for the console application. 
//

#include "stdafx.h"
#include
#include
#include
#include
#include
#include
#include "UIProcessor.h"

using namespace hpe;

typedef pcl::PointXYZRGBA PointType;

int main(int argc, char *argv[])
{
    HPESettings settings;

    try
    {
#ifdef __linux__
        OpenNIGrabber grabber;
#else
        KinectSDKGrabber grabber;
#endif

        std::shared_ptr<TemplateCreatorProcessor> templateCreator(new TemplateCreatorProcessor);
        std::shared_ptr<UIProcessor> uiProcessor(new UIProcessor("Cloud"));
        uiProcessor->SetGrabber(&grabber);

        grabber.AddProcessor(IProcessor::Ptr(new DepthPreprocessingProcessor(1, 3, 5, 5, 1.2)));
#ifndef __linux__
        grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor));
#endif
        grabber.AddProcessor(IProcessor::Ptr(new ConverterProcessor(ConverterProcessor::ConverterPtr(new KinectDataConverter))));
        grabber.AddProcessor(IProcessor::Ptr(new HeadExtractionProcessor));
        grabber.AddProcessor(std::static_pointer_cast<IProcessor>(templateCreator));
        grabber.AddProcessor(std::static_pointer_cast<IProcessor>(uiProcessor));

        grabber.Start();

        auto t = templateCreator->GetTemplate();
        t = Voxelize(t, 0.0025);
        if (t->size() != 0)
        {
            MovingLeastSquaresFilter mls;
            auto resampledTemplate = mls.Filter(t);

            PointPicker picker;
            picker.SetCloud(resampledTemplate);
            Common::Landmarks l = picker.Pick("Select the eyes and the nose", 3,
                                              "Select the left eye center, "
                                              "then the right eye center, then the tip of the nose.\n"
                                              "NOTE: Make sure you see the front part of the face, not the backward facing part");

            pcl::io::savePCDFileBinary(settings.GetString("Template"), *resampledTemplate);
            hpe::SaveLandmarks(l, settings.GetString("Landmarks"));
        }
    }
    catch (HPEException &ex)
    {
        std::cout << ex.what() << std::endl;
        std::cin.ignore();
    }

    return 0;
}


================================================
FILE: FaceCept3D/TemplateCreator/TemplateCreator.vcxproj
================================================
 Debug Win32 Debug x64 Release Win32 Release x64 {39EBEEC1-29C0-48BA-9858-E593CAEE7757} Win32Proj TemplateCreator Application true Unicode Application true Unicode Application
false true Unicode Application false true Unicode true true $(SolutionDir)$(Platform)\$(Configuration)\;$(LibraryPath) ../HeadPoseEstimationFramework;$(IncludePath) false false ../HeadPoseEstimationFramework;$(IncludePath) $(SolutionDir)$(Platform)\$(Configuration)\;$(LibraryPath) Use Level3 Disabled WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) Console true Use Level3 Disabled WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) Console true Level3 Use MaxSpeed true true WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) Console true true true Level3 Use MaxSpeed true true WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) Console true true true Create Create Create Create ================================================ FILE: FaceCept3D/TemplateCreator/UIProcessor.cpp ================================================ #include "stdafx.h" #include "UIProcessor.h" UIProcessor::UIProcessor(void) : m_grabber(nullptr), m_templateCreationTriggered(false) { } UIProcessor::UIProcessor(std::string key) : ShowCloudProcessor(key), m_grabber(nullptr), m_templateCreationTriggered(false) { } UIProcessor::~UIProcessor(void) { } void UIProcessor::KeyPressed(const pcl::visualization::KeyboardEvent &event) { if (event.keyDown()) { std::string sym = event.getKeySym(); std::cout << sym << std::endl; if (sym == "Escape") { if (m_grabber) { m_grabber->Stop(); } } } } void UIProcessor::SetGrabber(hpe::GrabberBase *grabber) { m_grabber = grabber; } pcl::PointCloud::Ptr UIProcessor::GetTemplate() { return m_template; } hpe::Common::Landmarks UIProcessor::GetLandmarks() { return m_landmarks; } ================================================ FILE: FaceCept3D/TemplateCreator/UIProcessor.h ================================================ #pragma once #ifndef UIPROCESSOR_H #define UIPROCESSOR_H #include #include #include #include class UIProcessor : public hpe::ShowCloudProcessor { public: UIProcessor(void); UIProcessor(std::string key); ~UIProcessor(void); void SetGrabber(hpe::GrabberBase 
*grabber); pcl::PointCloud::Ptr GetTemplate(); hpe::Common::Landmarks GetLandmarks(); protected: void KeyPressed(const pcl::visualization::KeyboardEvent &event); private: hpe::GrabberBase *m_grabber; bool m_templateCreationTriggered; pcl::PointCloud::Ptr m_template; hpe::Common::Landmarks m_landmarks; std::shared_ptr m_templateCreator; }; #endif ================================================ FILE: FaceCept3D/TemplateCreator/settings.ini ================================================ Template=MyTemplate.pcd Landmarks=MyTemplateLandmarks.txt ================================================ FILE: FaceCept3D/TemplateCreator/stdafx.cpp ================================================ // stdafx.cpp : source file that includes just the standard includes // TemplateCreator.pch will be the pre-compiled header // stdafx.obj will contain the pre-compiled type information #include "stdafx.h" // TODO: reference any additional headers you need in STDAFX.H // and not in this file ================================================ FILE: FaceCept3D/TemplateCreator/stdafx.h ================================================ // stdafx.h : include file for standard system include files, // or project specific include files that are used frequently, but // are changed infrequently // #pragma once #define _CRT_SECURE_NO_WARNINGS #include #undef _CRT_SECURE_NO_WARNINGS #include #include #include #include #include #ifndef __linux__ #include #include #include #else #include #endif #include #include #include #include #include #include ================================================ FILE: FaceCept3D/TemplateTracker/ShowTwoCloudsProcessor.cpp ================================================ #include "stdafx.h" #include "ShowTwoCloudsProcessor.h" #include #include using namespace hpe; ShowTwoCloudsProcessor::ShowTwoCloudsProcessor(void) : m_cloud1Key("Cloud1"), m_cloud2Key("Cloud2"), m_first(true), m_visualizer("Two Clouds") { } ShowTwoCloudsProcessor::~ShowTwoCloudsProcessor(void) { } void 
ShowTwoCloudsProcessor::SetCloud1Key(std::string key) { m_cloud1Key = key; } void ShowTwoCloudsProcessor::SetCloud2Key(std::string key) { m_cloud2Key = key; } void ShowTwoCloudsProcessor::Process(hpe::IDataStorage::Ptr dataStorage) { CloudXYZRGBA::Ptr cloud1 = dataStorage->GetAndCast(m_cloud1Key); CloudXYZRGBA::Ptr cloud2 = dataStorage->GetAndCast(m_cloud2Key); if (cloud1.get() == nullptr || cloud2.get() == nullptr) { return; } if (m_first) { m_first = false; m_visualizer.addPointCloud(cloud1->cloud, m_cloud1Key); m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, 0, 1, m_cloud1Key); m_visualizer.addPointCloud(cloud2->cloud, m_cloud2Key); m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 1, 0, 0, m_cloud2Key); } else { m_visualizer.updatePointCloud(cloud1->cloud, m_cloud1Key); m_visualizer.updatePointCloud(cloud2->cloud, m_cloud2Key); } m_visualizer.spinOnce(1); PCDGrabber::CloudFileInfo::Ptr fileInfo = dataStorage->GetAndCast("FileInfo"); if (fileInfo.get() != nullptr) { std::string screenshotPath = boost::replace_all_copy(fileInfo->Filename, ".pcd", ".png"); //m_visualizer.saveScreenshot(screenshotPath); } } ================================================ FILE: FaceCept3D/TemplateTracker/ShowTwoCloudsProcessor.h ================================================ #pragma once #ifndef SHOWTWOCLOUDSPROCESSOR_H #define SHOWTWOCLOUDSPROCESSOR_H #include #include class ShowTwoCloudsProcessor : public hpe::IProcessor { public: ShowTwoCloudsProcessor(void); ~ShowTwoCloudsProcessor(void); void SetCloud1Key(std::string key); void SetCloud2Key(std::string key); void Process(hpe::IDataStorage::Ptr dataStorage); private: std::string m_cloud1Key; std::string m_cloud2Key; pcl::visualization::PCLVisualizer m_visualizer; bool m_first; }; #endif ================================================ FILE: FaceCept3D/TemplateTracker/TemplateTracker.cpp ================================================ // 
TemplateTracker.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include "ShowTwoCloudsProcessor.h" #include "VoxelizeProcessor.h" #include "UIProcessor.h" #include "VisuilizeProcessor.h" #include #include #include #include #include #include #include #include #include #include using namespace hpe; int main(int argc, char *argv[]) { HPESettings settings; KinectSDKGrabber grabber; pcl::PointCloud::Ptr templateCloud(new pcl::PointCloud); pcl::io::loadPCDFile(settings.GetString("Template"), *templateCloud); std::shared_ptr uiProcessor(new UIProcessor); uiProcessor->SetGrabber(&grabber); std::shared_ptr preprocessor(new DepthPreprocessingProcessor(1, 3, 13, 7, 1.2)); std::shared_ptr templateTracker(new hpe::TrackingProcessor(templateCloud)); templateTracker->SetTemplateLandmarks(LoadLandmarks(settings.GetString("Landmarks"))); templateTracker->SetSaveLandmakrs(false); std::string cloudToShowKey = "FilteredOriginalCloud"; std::shared_ptr visualizer(new VisuilizeProcessor(cloudToShowKey)); visualizer->SetTrackingProcessor(templateTracker); std::shared_ptr ferProcessor(new hpe::FacialExpressionProcessor(settings.GetString("FERData"))); ferProcessor->SubscribeFacialExpressionReadySignal(boost::bind(&VisuilizeProcessor::HandleFER, visualizer.get(), _1)); std::shared_ptr> functorFilter(new FunctorFilter( [&settings](pcl::PointCloud::Ptr cloud)->pcl::PointCloud::Ptr { return PassThrough(cloud, 0., settings.GetValue("DistanceToShow"), "z"); })); std::shared_ptr filterProcessor(new FilterProcessor(functorFilter, "OriginalCloud", "FilteredOriginalCloud")); grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor("OriginalCloud"))); grabber.AddProcessor(filterProcessor); grabber.AddProcessor(IProcessor::Ptr(new DetectorProcessor(settings.GetString("DetectorData")))); grabber.AddProcessor(preprocessor); grabber.AddProcessor(IProcessor::Ptr(new KinectSDKConverterProcessor("Cloud"))); grabber.AddProcessor(IProcessor::Ptr(new 
VoxelizeProcessor(0.005))); grabber.AddProcessor(templateTracker); grabber.AddProcessor(ferProcessor); grabber.AddProcessor(visualizer); grabber.Start(); return 0; } ================================================ FILE: FaceCept3D/TemplateTracker/TemplateTracker.vcxproj ================================================  Debug Win32 Debug x64 Release Win32 Release x64 {68A9917E-A682-4013-8D30-389AB45A9E0D} Win32Proj TemplateTracker Application true Unicode Application true Unicode Application false true Unicode Application false true Unicode true true ../HeadPoseEstimationFramework;$(IncludePath) $(SolutionDir)$(Platform)\$(Configuration)\;$(LibraryPath) false false ../HeadPoseEstimationFramework;$(IncludePath) $(SolutionDir)$(Platform)\$(Configuration)\;$(LibraryPath) Use Level3 Disabled WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) Console true Use Level3 Disabled WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) true false Console true /NODEFAULTLIB:LIBCMTD.lib %(AdditionalOptions) Level3 Use MaxSpeed true true WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) Console true true true Level3 Use MaxSpeed true true WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) true true Console true true true 
HeadPoseEstimationFramework_Release.lib;Kinect10.lib;opengl32.lib;MapReduceMPI.lib;mpistubs.lib;QVTK.lib;vtkalglib.lib;vtkCharts.lib;vtkCommon.lib;vtkDICOMParser.lib;vtkexoIIc.lib;vtkexpat.lib;vtkFiltering.lib;vtkfreetype.lib;vtkftgl.lib;vtkGenericFiltering.lib;vtkGeovis.lib;vtkGraphics.lib;vtkhdf5.lib;vtkHybrid.lib;vtkImaging.lib;vtkInfovis.lib;vtkIO.lib;vtkjpeg.lib;vtklibxml2.lib;vtkmetaio.lib;vtkNetCDF.lib;vtkNetCDF_cxx.lib;vtkpng.lib;vtkproj4.lib;vtkRendering.lib;vtksqlite.lib;vtksys.lib;vtktiff.lib;vtkverdict.lib;vtkViews.lib;vtkVolumeRendering.lib;vtkWidgets.lib;vtkzlib.lib;opencv_calib3d2410.lib;opencv_contrib2410.lib;opencv_core2410.lib;opencv_features2d2410.lib;opencv_flann2410.lib;opencv_gpu2410.lib;opencv_highgui2410.lib;opencv_imgproc2410.lib;opencv_legacy2410.lib;opencv_ml2410.lib;opencv_nonfree2410.lib;opencv_objdetect2410.lib;opencv_ocl2410.lib;opencv_photo2410.lib;opencv_stitching2410.lib;opencv_superres2410.lib;opencv_ts2410.lib;opencv_video2410.lib;opencv_videostab2410.lib;qhullstatic.lib;pcl_common_release.lib;pcl_features_release.lib;pcl_filters_release.lib;pcl_io_ply_release.lib;pcl_io_release.lib;pcl_kdtree_release.lib;pcl_keypoints_release.lib;pcl_octree_release.lib;pcl_outofcore_release.lib;pcl_people_release.lib;pcl_recognition_release.lib;pcl_registration_release.lib;pcl_sample_consensus_release.lib;pcl_search_release.lib;pcl_segmentation_release.lib;pcl_surface_release.lib;pcl_tracking_release.lib;pcl_visualization_release.lib;flann_cpp_s.lib;libboost_atomic-vc100-mt-1_55.lib;libboost_chrono-vc100-mt-1_55.lib;libboost_context-vc100-mt-1_55.lib;libboost_coroutine-vc100-mt-1_55.lib;libboost_date_time-vc100-mt-1_55.lib;libboost_exception-vc100-mt-1_55.lib;libboost_filesystem-vc100-mt-1_55.lib;libboost_graph-vc100-mt-1_55.lib;libboost_iostreams-vc100-mt-1_55.lib;libboost_locale-vc100-mt-1_55.lib;libboost_log-vc100-mt-1_55.lib;libboost_log_setup-vc100-mt-1_55.lib;libboost_math_c99-vc100-mt-1_55.lib;libboost_math_c99f-vc100-mt-1_55.lib;libboost_
math_c99l-vc100-mt-1_55.lib;libboost_math_tr1-vc100-mt-1_55.lib;libboost_math_tr1f-vc100-mt-1_55.lib;libboost_math_tr1l-vc100-mt-1_55.lib;libboost_prg_exec_monitor-vc100-mt-1_55.lib;libboost_program_options-vc100-mt-1_55.lib;libboost_random-vc100-mt-1_55.lib;libboost_regex-vc100-mt-1_55.lib;libboost_serialization-vc100-mt-1_55.lib;libboost_signals-vc100-mt-1_55.lib;libboost_system-vc100-mt-1_55.lib;libboost_test_exec_monitor-vc100-mt-1_55.lib;libboost_thread-vc100-mt-1_55.lib;libboost_timer-vc100-mt-1_55.lib;libboost_unit_test_framework-vc100-mt-1_55.lib;libboost_wave-vc100-mt-1_55.lib;libboost_wserialization-vc100-mt-1_55.lib;phonon4.lib;QAxContainer.lib;QAxServer.lib;Qt3Support4.lib;QtCLucene4.lib;QtCore4.lib;QtDeclarative4.lib;QtDesigner4.lib;QtDesignerComponents4.lib;QtGui4.lib;QtHelp4.lib;qtmain.lib;QtMultimedia4.lib;QtNetwork4.lib;QtOpenGL4.lib;QtScript4.lib;QtScriptTools4.lib;QtSql4.lib;QtSvg4.lib;QtTest4.lib;QtUiTools.lib;QtWebKit4.lib;QtXml4.lib;QtXmlPatterns4.lib;%(AdditionalDependencies) Create Create Create Create ================================================ FILE: FaceCept3D/TemplateTracker/UIProcessor.cpp ================================================ #include "stdafx.h" #include "UIProcessor.h" #include #include UIProcessor::UIProcessor(void) : m_grabber(nullptr), m_templateCreationTriggered(false) { } UIProcessor::UIProcessor(std::string key) : m_grabber(nullptr), m_templateCreationTriggered(false) { } UIProcessor::~UIProcessor(void) { } void UIProcessor::KeyPressed(const pcl::visualization::KeyboardEvent &event) { if (event.keyDown()) { std::string sym = event.getKeySym(); std::cout << sym << std::endl; if (sym == "Escape") { if (m_grabber) { m_grabber->Stop(); } } } } void UIProcessor::SetGrabber(hpe::GrabberBase *grabber) { m_grabber = grabber; } void UIProcessor::Process(hpe::IDataStorage::Ptr dataStorage) { using namespace hpe; RawFrames::Ptr frames = dataStorage->GetAndCastNotNull("RawFrames"); cv::imshow("color", frames->colorFrame); 
cv::imshow("depth", frames->depthFrame); int key = cv::waitKey(1); if (key != -1) { m_grabber->Stop(); } } ================================================ FILE: FaceCept3D/TemplateTracker/UIProcessor.h ================================================ #pragma once #ifndef UIPROCESSOR_H #define UIPROCESSOR_H #include #include #include class UIProcessor : public hpe::IProcessor { public: UIProcessor(void); UIProcessor(std::string key); ~UIProcessor(void); void SetGrabber(hpe::GrabberBase *grabber); pcl::PointCloud::Ptr GetTemplate(); void Process(hpe::IDataStorage::Ptr dataStorage); protected: void KeyPressed(const pcl::visualization::KeyboardEvent &event); private: hpe::GrabberBase *m_grabber; bool m_templateCreationTriggered; pcl::PointCloud::Ptr m_template; std::shared_ptr m_templateCreator; }; #endif ================================================ FILE: FaceCept3D/TemplateTracker/VisuilizeProcessor.cpp ================================================ #include "stdafx.h" #include "VisuilizeProcessor.h" #include #include #include #include #include #include using namespace hpe; VisuilizeProcessor::VisuilizeProcessor(std::string cloudKey) : m_cloudKey(cloudKey), m_landmarksKey("Landmarks"), m_visualizer("VisualizerProcessor"), m_first(true), m_saveScreenshots(false), m_haveFerData(false) { InitExpressionLabels(); } VisuilizeProcessor::VisuilizeProcessor(bool saveScreenshots) : m_cloudKey("Cloud"), m_landmarksKey("Landmarks"), m_visualizer("VisualizerProcessor"), m_first(true), m_saveScreenshots(saveScreenshots), m_haveFerData(false) { InitExpressionLabels(); } VisuilizeProcessor::~VisuilizeProcessor(void) { } void VisuilizeProcessor::Process(hpe::IDataStorage::Ptr dataStorage) { CloudXYZRGBA::Ptr cloud = dataStorage->GetAndCast(m_cloudKey); LandmarksObject::Ptr landmarks = dataStorage->GetAndCast>(m_landmarksKey); if (cloud.get() == nullptr || landmarks.get() == nullptr) { return; } pcl::PointCloud::Ptr eyes(new pcl::PointCloud); 
eyes->push_back(landmarks->landmarks[0].point);
eyes->push_back(landmarks->landmarks[1].point);

pcl::ModelCoefficients cylinderCoefficients;
cylinderCoefficients.values.push_back(landmarks->landmarks[2].point.x);
cylinderCoefficients.values.push_back(landmarks->landmarks[2].point.y);
cylinderCoefficients.values.push_back(landmarks->landmarks[2].point.z);
cylinderCoefficients.values.push_back((landmarks->landmarks[3].point.x - landmarks->landmarks[2].point.x) / 10);
cylinderCoefficients.values.push_back((landmarks->landmarks[3].point.y - landmarks->landmarks[2].point.y) / 10);
cylinderCoefficients.values.push_back((landmarks->landmarks[3].point.z - landmarks->landmarks[2].point.z) / 10);
cylinderCoefficients.values.push_back(0.002);

Eigen::Vector3f angles = VectorToEulerAngles(landmarks->landmarks[2].point.getVector3fMap() - landmarks->landmarks[3].point.getVector3fMap());
std::string anglesText = (boost::format("Head pose in degrees:\n Yaw %5.2f\n Tilt %5.2f\n Roll %5.2f") % angles(0) % angles(1) % angles(2)).str();

if (m_first)
{
    m_first = false;
    m_visualizer.registerKeyboardCallback(&VisuilizeProcessor::KeyboardEventCallback, this);
    m_visualizer.addCylinder(cylinderCoefficients);
    m_visualizer.setShapeRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, ((float)0x73) / 255, ((float)0x3C) / 255, "cylinder");
    m_visualizer.addPointCloud(cloud->cloud, m_cloudKey);
    m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, m_cloudKey);
    m_visualizer.addPointCloud(eyes, m_landmarksKey);
    m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 25, 0, 0, m_landmarksKey);
    m_visualizer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, m_landmarksKey);
    m_visualizer.addText(anglesText, 25, 175, 17, 1, 1, 1, "anglesText");
    //m_visualizer.addText("Facial expression:", 25, 100, 17, 1, 1, 1, "anglesTextCaption");
    m_visualizer.setBackgroundColor(0, 0, 0);

    if (m_saveScreenshots)
    {
        if (boost::filesystem::exists("camera.cam"))
        {
            pcl::visualization::Camera camera;
            LoadCamera(camera, "camera.cam");
            m_visualizer.setCameraParameters(camera);
        }
        else
        {
            while (m_visualizer.wasStopped() == false)
            {
                m_visualizer.spinOnce(100);
            }
            std::vector<pcl::visualization::Camera> cameras;
            m_visualizer.getCameras(cameras);
            pcl::visualization::Camera camera = cameras[0];
            SaveCamera(camera, "camera.cam");
        }
    }
}
else
{
    m_visualizer.removeShape("cylinder");
    m_visualizer.addCylinder(cylinderCoefficients);
    m_visualizer.removeShape("anglesText");
    m_visualizer.addText(anglesText, 25, 175, 17, 1, 1, 1, "anglesText");
    m_visualizer.setShapeRenderingProperties(pcl::visualization::PCL_VISUALIZER_COLOR, 0, ((float)0x73) / 255, ((float)0x3C) / 255, "cylinder");
    m_visualizer.updatePointCloud(cloud->cloud, m_cloudKey);
    m_visualizer.updatePointCloud(eyes, m_landmarksKey);

    //if (m_haveFerData)
    //{
    //    int delta = 19;
    //    int x = 20;
    //    int y = 140 - delta;
    //    auto recognizedExpression = std::max_element(m_ferData.begin(), m_ferData.end());
    //    int recognizedExpressionIndex = std::distance(m_ferData.begin(), recognizedExpression);
    //    for (int i = 0; i < m_ferData.size(); i++)
    //    {
    //        double colorScale = i == recognizedExpressionIndex ? 2 : 1;
    //        std::string message = (boost::format(" %s %f") % m_expressionLabels[i] % m_ferData[i]).str();
    //        std::string textObjectLabel = (boost::format("ferText%1%") % i).str();
    //        m_visualizer.removeShape(textObjectLabel);
    //        m_visualizer.addText(message, x, y, 17, (2 - colorScale) * 0.5, (2 - colorScale) * 0.5, colorScale * 0.5, textObjectLabel);
    //        y -= delta;
    //    }
    //}

    if (m_haveFerData)
    {
        int x = 20;
        int y = 75;
        int delta = 16;
        auto recognizedExpression = std::max_element(m_ferData.begin(), m_ferData.end());
        int recognizedExpressionIndex = std::distance(m_ferData.begin(), recognizedExpression);
        for (int i = 0; i < m_ferData.size(); i++)
        {
            double colorScale = i == recognizedExpressionIndex ? 2 : 1;
            std::string message = (boost::format(" %1%: %2%") % m_expressionLabels[i] % m_ferData[i]).str();
            std::string textObjectLabel = (boost::format("ferText%1%") % i).str();
            m_visualizer.removeShape(textObjectLabel);
            m_visualizer.addText(message, x, y, 17, colorScale * 0.5, colorScale * 0.5, colorScale * 0.5, textObjectLabel);
            y += delta;
        }
    }
}

m_visualizer.spinOnce(1);

if (m_saveScreenshots)
{
    PCDGrabber::CloudFileInfo::Ptr fileInfo = dataStorage->GetAndCast<PCDGrabber::CloudFileInfo>("FileInfo");
    if (fileInfo.get() != nullptr)
    {
        std::string screenshotPath = boost::replace_all_copy(fileInfo->Filename, ".pcd", ".png");
        m_visualizer.saveScreenshot(screenshotPath);
    }
}
}

Eigen::Vector3f VisuilizeProcessor::VectorToEulerAngles(Eigen::Vector3f v)
{
    Eigen::Vector3f normed = v / v.norm();
    float x = normed(0);
    float y = normed(1);
    float z = normed(2);
    Eigen::Vector3f result;
    result(0) = std::atan2(z, x) - M_PI / 2; // yaw
    result(1) = -std::atan2(y, z);           // tilt
    result(2) = std::atan(y / x);            // roll
    return result * 180 / M_PI;
}

void VisuilizeProcessor::SaveCamera(pcl::visualization::Camera &camera, std::string file)
{
    std::ofstream str(file);
    str << camera.clip[0] << std::endl;
    str << camera.clip[1] << std::endl;
    str << camera.focal[0] << std::endl;
    str << camera.focal[1] << std::endl;
    str << camera.focal[2] << std::endl;
    str << camera.fovy << std::endl;
    str << camera.pos[0] << std::endl;
    str << camera.pos[1] << std::endl;
    str << camera.pos[2] << std::endl;
    str << camera.view[0] << std::endl;
    str << camera.view[1] << std::endl;
    str << camera.view[2] << std::endl;
    str << camera.window_pos[0] << std::endl;
    str << camera.window_pos[1] << std::endl;
    str << camera.window_size[0] << std::endl;
    str << camera.window_size[1] << std::endl;
}

void VisuilizeProcessor::LoadCamera(pcl::visualization::Camera &camera, std::string file)
{
    std::ifstream str(file);
    str >> camera.clip[0];
    str >> camera.clip[1];
    str >> camera.focal[0];
    str >> camera.focal[1];
    str >> camera.focal[2];
    str >> camera.fovy;
    str >> camera.pos[0];
    str >> camera.pos[1];
    str >> camera.pos[2];
    str >> camera.view[0];
    str >> camera.view[1];
    str >> camera.view[2];
    str >> camera.window_pos[0];
    str >> camera.window_pos[1];
    str >> camera.window_size[0];
    str >> camera.window_size[1];
}

void VisuilizeProcessor::HandleFER(std::vector ferdata)
{
    m_haveFerData = true;
    m_ferData = ferdata;
}

void VisuilizeProcessor::InitExpressionLabels()
{
    m_expressionLabels.clear();
    m_expressionLabels.push_back("Neutral");
    m_expressionLabels.push_back("Happy");
    m_expressionLabels.push_back("Surprise");
}

void VisuilizeProcessor::KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, void *sender)
{
    if (event.getKeySym() == "space")
    {
        VisuilizeProcessor *_this = static_cast<VisuilizeProcessor *>(sender);
        if (_this->m_trackingProcessor.get() != nullptr)
        {
            _this->m_trackingProcessor->ResetTracker();
        }
    }
}

================================================
FILE: FaceCept3D/TemplateTracker/VisuilizeProcessor.h
================================================
#pragma once
#ifndef VISUALIZEPROCESSOR_H
#define VISUALIZEPROCESSOR_H

#include
#include
#include

class VisuilizeProcessor : public hpe::IProcessor
{
    public:
        VisuilizeProcessor(std::string cloudKey = "Cloud");
        VisuilizeProcessor(bool saveScreenshots);
        ~VisuilizeProcessor(void);

        void Process(hpe::IDataStorage::Ptr dataStorage);
        void HandleFER(std::vector ferdata);
        void SetTrackingProcessor(std::shared_ptr processor)
        {
            m_trackingProcessor = processor;
        }

    private:
        pcl::visualization::PCLVisualizer m_visualizer;
        bool m_first;
        bool m_saveScreenshots;
        bool m_haveFerData;
        std::vector m_ferData;
        std::string m_cloudKey;
        std::string m_landmarksKey;
        std::vector m_expressionLabels;
        std::shared_ptr m_trackingProcessor;

        static void KeyboardEventCallback(const pcl::visualization::KeyboardEvent &event, void *sender);
        Eigen::Vector3f VectorToEulerAngles(Eigen::Vector3f v);
        void SaveCamera(pcl::visualization::Camera &camera, std::string file);
        void LoadCamera(pcl::visualization::Camera &camera,
                        std::string file);
        void InitExpressionLabels();
};

#endif

================================================
FILE: FaceCept3D/TemplateTracker/VoxelizeProcessor.cpp
================================================
#include "stdafx.h"
#include "VoxelizeProcessor.h"

#include
#include

VoxelizeProcessor::VoxelizeProcessor(float size)
    : m_size(size)
{
}

VoxelizeProcessor::~VoxelizeProcessor(void)
{
}

void VoxelizeProcessor::Process(hpe::IDataStorage::Ptr dataStorage)
{
    using namespace hpe;
    CloudXYZRGBA::Ptr cloudObject = dataStorage->GetAndCastNotNull<CloudXYZRGBA>("Cloud", "VoxelizeProcessor::Process - Cloud is null");
    cloudObject->cloud = PassThrough(cloudObject->cloud, 0, 1.5, "z", false);
    cloudObject->cloud = Voxelize(cloudObject->cloud, m_size);
}

================================================
FILE: FaceCept3D/TemplateTracker/VoxelizeProcessor.h
================================================
#pragma once

#include

class VoxelizeProcessor : public hpe::IProcessor
{
    public:
        VoxelizeProcessor(float size = 0.005f);
        ~VoxelizeProcessor(void);

        void Process(hpe::IDataStorage::Ptr dataStorage);

    private:
        float m_size;
};

================================================
FILE: FaceCept3D/TemplateTracker/settings.ini
================================================
DistanceToShow=1.2
StoreFolder=d:\Documents\Image Databases\video
DetectorData=d:\Projects\HeadPoseEstimation\Data\DetectorDataDali
FERData=d:\Projects\HeadPoseEstimation\Data\FERDataDali
Template=d:\Projects\HeadPoseEstimation\Data\sergey.pcd
Landmarks=landmarks.bnd
ShapePredictor=d:\Projects\HeadPoseEstimation\Data\shape_predictor_68_face_landmarks.dat

================================================
FILE: FaceCept3D/TemplateTracker/stdafx.cpp
================================================
// stdafx.cpp : source file that includes just the standard includes
// TemplateTracker.pch will be the pre-compiled header
// stdafx.obj will contain the pre-compiled type information

#include "stdafx.h"

// TODO: reference any additional headers you need in STDAFX.H
// and not in this file

================================================
FILE: FaceCept3D/TemplateTracker/stdafx.h
================================================
// stdafx.h : include file for standard system include files,
// or project specific include files that are used frequently, but
// are changed infrequently
//

#pragma once

#include
#include
#include
#include
#include
#include
#include
#include
#include
#include

#ifndef __linux__
#include
#include
#include
#else
#include
#endif

================================================
FILE: LICENSE
================================================
The MIT License (MIT)

Copyright (c) 2015 sergeytulyakov

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

================================================
FILE: README.md
================================================
# FaceCept3D

FaceCept3D: 3D Face Analysis and Recognition

## Introduction

FaceCept3D is a real-time framework for 3D face analysis and recognition.
It contains a set of extensible components that can be combined to fulfil a specific task. At this stage we open-source the following functionality:

1. Person-specific template creation
2. Extreme head pose estimation
3. Facial expression analysis

FaceCept3D is based on the following works:

* Robust Real-Time Extreme Head Pose Estimation. [[pdf](https://github.com/sergeytulyakov/FaceCept3D/blob/master/papers/Robust%20Real-Time%20Extreme%20Head%20Pose%20Estimation.pdf)]
* Facial expression recognition under a wide range of head poses. [[pdf](https://github.com/sergeytulyakov/FaceCept3D/blob/master/papers/Facial%20Expression%20Recognition%20under%20a%20Wide%20Range%20of%20Head%20Poses.pdf)]

Here is an example image of head pose estimation using FaceCept3D.

![Head Pose Estimation Examples](https://github.com/sergeytulyakov/FaceCept3D/blob/master/docs/home/HalfHeadPoseEstimationExamples.jpg)

See the FaceCept3D [wiki](https://github.com/sergeytulyakov/FaceCept3D/wiki) for more details.

## Requirements

To install FaceCept3D you need to have the following libraries available:

* [Point Cloud Library PCL](http://www.pointclouds.org/) version 1.7
* [OpenCV](http://opencv.org/) version 2.4 or newer
* [Qt](http://download.qt.io/archive/qt/4.8/4.8.6/) version 4.8 (required by VTK 5.6)

and their dependencies. In addition, if you plan to use a depth sensor, you will need to install its driver. Currently we support only the Microsoft Kinect 1.0 sensor; however, the code can be easily extended to most of the available RGB-D sensors. To use MS Kinect sensors:

* On Windows, install the [SDK (v 1.8)](http://www.microsoft.com/en-us/download/details.aspx?id=40278)

## Installation

FaceCept3D was tested on Windows and Linux. Mac users should follow the Linux installation instructions.

### Linux

* **OpenNI and SensorKin drivers**. A great guide is [here](https://bitbucket.org/samirmenon/scl-manips-v2/wiki/vision/kinect).
* **Prerequisites**.
Install the following libraries using your package manager:

  ```Shell
  boost, eigen, flann, vtk (v 5.6), qhull, opencv, tbb, qt (v 4.8)
  ```

  For Ubuntu 14.04, the following command will do the trick:

  ```Shell
  sudo apt-get install libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libqhull-dev libopencv-dev libtbb-dev libqt4-dev
  ```

  In addition, you need to install cmake if you don't have it:

  ```Shell
  sudo apt-get install cmake
  ```

* **Install PCL**. We need to build and install PCL ourselves, since the version [here](http://pointclouds.org/downloads/linux.html) is built without the *-std=c++11* flag. This [guide](http://pointclouds.org/downloads/source.html) will help you build PCL from source.
* **Check that everything works**. Try running `pcl_openni_viewer`. If you don't see any output, there is something wrong with your installation.
* **Build FaceCept3D**.

  ```Shell
  git clone https://github.com/sergeytulyakov/FaceCept3D.git
  cd FaceCept3D
  mkdir build
  cd build
  cmake ..
  make
  ```

### Windows

On Windows there are two ways to build FaceCept3D:

1. Using CMake to generate an *.sln file
2. Using the provided *.sln file

It is somewhat more convenient to use the latter, since the provided solution file is cleaner than the one generated by CMake. If you still want to use CMake, you will need to enter all the necessary paths manually. To simplify the process we provide all the dependencies in binary format. All libs were compiled using msvc2010-win64. You can still use them with a later version of Visual Studio, but you need to have the msvc2010 toolchain installed.

* **Install Microsoft Kinect Driver 1.8**. Download it [here](http://www.microsoft.com/en-us/download/details.aspx?id=40278) and install.
* **Download and unpack precompiled dependencies**.
Download them [here](http://sourceforge.net/projects/facecept3d-3rdparty/files/msvc2010-win64/).
* **Set up environment variables**. You may find [Rapid Environment Editor](http://www.rapidee.com/en/download) useful for editing the environment on Windows. Set KINECTSDK10_DIR and FACECEPT3D_3RDPARTY_DIR. For example, mine point to:

  ```Shell
  KINECTSDK10_DIR=c:\Program Files\Microsoft SDKs\Kinect\v1.8
  FACECEPT3D_3RDPARTY_DIR=d:\Dependencies\FaceCept3D-3rdparty\
  ```

  You also need to add the following folders to your path:

  ```Shell
  %FACECEPT3D_3RDPARTY_DIR%\Qt\bin
  %FACECEPT3D_3RDPARTY_DIR%\PCL\bin
  %FACECEPT3D_3RDPARTY_DIR%\Opencv\bin
  %FACECEPT3D_3RDPARTY_DIR%\FLANN\bin
  %FACECEPT3D_3RDPARTY_DIR%\QHull\bin
  ```

The rest is configured in the project properties files. If something doesn't work, feel free to inspect the *.props files in the Config folder. If you want to use your own versions of the dependencies, you will have to change the *.props files.

Open 3dheadposeestimation.sln using your version of Visual Studio and try to build. If you have a Visual Studio newer than 2010, **do not agree to convert the solution to a newer toolchain**: since the dependencies are built with msvc2010-win64, the solution will not link.

### What to do next

Have a look at our [wiki](https://github.com/sergeytulyakov/FaceCept3D/wiki) pages:

1. [Getting Started](https://github.com/sergeytulyakov/FaceCept3D/wiki/1.-Gettings-Started)
2. [Point Cloud Registration](https://github.com/sergeytulyakov/FaceCept3D/wiki/2.-Point-Cloud-Registration)
3. [Personalized Template Creation with FaceCept3D](https://github.com/sergeytulyakov/FaceCept3D/wiki/3.-Personalized-Template-Creation)

## Citing FaceCept3D

If you use FaceCept3D in your research, please cite one of the following (or both):

```
@inproceedings{Tulyakov2014,
  author = {Tulyakov, S. and Vieriu, R. L. and Semeniuta, S. and Sebe, N.},
  booktitle = {International Conference on Pattern Recognition},
  title = {{Robust Real-Time Extreme Head Pose Estimation}},
  year = {2014},
}

@inproceedings{Vieriu2015,
  author = {Vieriu, R. L. and Tulyakov, S. and Sangineto, E. and Semeniuta, S. and Sebe, N.},
  booktitle = {Automatic Face and Gesture Recognition},
  title = {{Facial Expression Recognition under a Wide Range of Head Poses}},
  year = {2015},
}
```