Repository: gesanqiu/gstreamer-example Branch: main Commit: fd320e812221 Files: 94 Total size: 39.2 MB Directory structure: gitextract_uf0uoqov/ ├── .gitignore ├── LICENSE ├── README.md ├── ai_integration/ │ ├── deepstream/ │ │ ├── CMakeLists.txt │ │ ├── doc/ │ │ │ └── video-pipeline.dot │ │ ├── inc/ │ │ │ ├── Common.h │ │ │ ├── DoubleBufferCache.h │ │ │ ├── Logger.h │ │ │ └── VideoPipeline.h │ │ ├── sp_mp4.json │ │ ├── sp_rtsp.json │ │ ├── sp_uc.json │ │ └── src/ │ │ ├── VideoPipeline.cpp │ │ └── main.cpp │ ├── test_30fps.h264 │ ├── test_30fps.ts │ └── video-pipeline.dot ├── application_develop/ │ ├── GstPadProbe/ │ │ ├── CMakeLists.txt │ │ ├── README.md │ │ ├── doc/ │ │ │ └── gstpadprobe.md │ │ ├── inc/ │ │ │ ├── Common.h │ │ │ ├── DoubleBufferCache.h │ │ │ └── VideoPipeline.h │ │ └── src/ │ │ ├── VideoPipeline.cpp │ │ └── main.cpp │ ├── README.md │ ├── app/ │ │ ├── CMakeLists.txt │ │ ├── README.md │ │ ├── doc/ │ │ │ ├── app.md │ │ │ ├── appsink.md │ │ │ └── appsrc.md │ │ ├── inc/ │ │ │ ├── Common.h │ │ │ ├── DoubleBufferCache.h │ │ │ ├── appsink.h │ │ │ └── appsrc.h │ │ └── src/ │ │ ├── appsink.cpp │ │ ├── appsrc.cpp │ │ └── main.cpp │ ├── build_pipeline/ │ │ ├── CMakeLists.txt │ │ ├── README.md │ │ ├── doc/ │ │ │ └── build_pipeline.md │ │ ├── inc/ │ │ │ ├── Common.h │ │ │ └── VideoPipeline.h │ │ └── src/ │ │ ├── VideoPipeline.cpp │ │ ├── gst_element_factory_make.cpp │ │ └── gst_parse_launch.cpp │ ├── custom_user_plugin/ │ │ ├── CMakeLists.txt │ │ ├── README.md │ │ ├── config.h │ │ ├── gstoverlay.c │ │ └── gstoverlay.h │ └── uridecodebin/ │ ├── CMakeLists.txt │ ├── README.md │ ├── doc/ │ │ └── uridecodebin.md │ ├── inc/ │ │ ├── Common.h │ │ ├── DoubleBufferCache.h │ │ └── VideoPipeline.h │ └── src/ │ ├── VideoPipeline.cpp │ └── main.cpp ├── basic_theory/ │ ├── README.md │ ├── app_dev_manual/ │ │ ├── autoplugging.md │ │ ├── fundamental.md │ │ ├── interfaces.md │ │ ├── metadata.md │ │ └── threads.md │ ├── basic_tutorial/ │ │ ├── dynamic_pipelines.md │ │ ├── gstreamer_concepts.md │ │ ├── hello_world.md │ │ ├── media_format.md │ │ ├── multithread.md │ │ └── short_cutting_pipeline.md │ └── playback/ │ ├── hardware_decode.md │ ├── playbin.md │ ├── playbin_sink.md │ ├── progressive_stream.md │ ├── shortcut_pipeline.md │ └── subtitle.md ├── deepstream/ │ ├── DeepStreamSample.md │ └── nvdsosd.md ├── postscript.md ├── qti_gst_plugins/ │ └── qtioverlay/ │ ├── README.md │ ├── qtimlmeta/ │ │ ├── CMakeLists.txt │ │ ├── ml_meta.c │ │ └── ml_meta.h │ ├── qtioverlay/ │ │ ├── CMakeLists.txt │ │ ├── config.h.in │ │ ├── gstoverlay.cc │ │ └── gstoverlay.h │ └── qtiqmmf_overlay/ │ ├── CMakeLists.txt │ ├── overlay_blit_kernel.cl │ ├── qmmf_overlay.cc │ └── qmmf_overlay_item.h └── useful_tricks/ ├── rtspsrc_1.md └── uridecodebin_1.md ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitignore ================================================ # Prerequisites *.d # Compiled Object files *.slo *.lo *.o *.obj # Precompiled Headers *.gch *.pch # Compiled Dynamic libraries *.so *.dylib *.dll # Fortran module files *.mod *.smod # Compiled Static libraries *.lai *.la *.a *.lib # Executables *.exe *.out *.app # Build Files build/ # MP4 files *.mp4 # VS Code config files .vscode/ ================================================ FILE: LICENSE ================================================ MIT License Copyright (c) 2021 Ricardo Lu Permission is hereby granted, free of charge, to any person obtaining a 
copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

================================================
FILE: README.md
================================================

# GStreamer-example

[![](https://img.shields.io/badge/Auther-@RicardoLu-red.svg)](https://github.com/gesanqiu)![](https://img.shields.io/badge/Version-2.0.0-blue.svg)[![](https://img.shields.io/github/stars/gesanqiu/gstreamer-example.svg?style=social&label=Stars)](https://github.com/gesanqiu/gstreamer-example)

[GStreamer](https://gstreamer.freedesktop.org/documentation/index.html?gi-language=c) is a powerful and versatile framework for building streaming media applications. Much of its strength comes from its modularity: GStreamer can seamlessly incorporate new plugin modules. But because modularity and power tend to come at the cost of greater complexity, developing new applications is not always easy.

Two things motivated me to start this project:

- There is relatively little GStreamer development documentation on the web; for the most part you have to rely on the official English [API Reference](https://gstreamer.freedesktop.org/documentation/libs.html?gi-language=c) and [Tutorials](https://gstreamer.freedesktop.org/documentation/tutorials/index.html?gi-language=c).
- At the moment I am the only maintainer, so this is above all a record of my own learning as a developer, but everyone is welcome to join.

## Changelog

- 2022.02.06: Updated the interfaces tutorial.
- 2022.02.06: Fixed ai_integration pipeline bugs.
- 2022.02.03: Updated ai_integration pipeline v2.0, adding USB camera and RTMP streaming support.
- 2022.01.26: Updated the metadata tutorial.
- 2022.11.06: Updated the threads tutorial.
- 2022.09.12: Updated `uridecodebin` source-code analysis, part 1.
- 2022.09.10: Updated `rtspsrc` source-code analysis, part 1.
- 2022.07.17: Updated the pipeline built on deepstream-6.1, which will later integrate the yolov5s.trt model.
- 2022.02.10: Added this changelog, revised the update plan, organized the published content, and removed redundant placeholder docs; further updates as time permits.
- 2022.01.25: Merged in the Tutorial documents.
- 2021.10.31: Updated the `nvdsosd` plugin tutorial.
- 2021.09.09: Updated the GstPadProbe tutorial.
- 2021.09.04: Added an audio-track processing branch.
- 2021.08.31: Updated the `uridecodebin` plugin tutorial.
- 2021.08.29: Updated the `appsink`/`appsrc` plugin tutorial.
- 2021.08.27: Updated the pipeline construction tutorial.
- 2021.08.26: Updated the `qtioverlay` plugin tutorial.
- 2021.08.24: Initial commit.

## Update Plan

### Basic Theory

This chapter is mainly a translation of the official [GStreamer Tutorials](https://gstreamer.freedesktop.org/documentation/tutorials/index.html?gi-language=c).

### Application Development

Drawing on my own development experience, this chapter walks through the fundamental techniques needed to build a video streaming application with GStreamer.

- The two ways to build a pipeline: `gst_parse_launch()` and `gst_element_factory_make()` (done; see the sketch below)
- `uridecodebin` in depth (done)
- appsink/appsrc (done)
- GstPadProbe (done)
- Writing a custom plugin
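As a taste of the difference between the two construction styles, here is a minimal sketch (my own illustration, not a file from this repository) that builds the same test pipeline both ways; the element choices are placeholders:

```cpp
#include <gst/gst.h>

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);

    /* Style 1: describe the whole pipeline as a launch string. */
    GError* error = nullptr;
    GstElement* pipeline1 = gst_parse_launch(
        "videotestsrc ! videoconvert ! autovideosink", &error);
    if (!pipeline1) {
        g_printerr("gst_parse_launch failed: %s\n", error->message);
        g_clear_error(&error);
        return -1;
    }

    /* Style 2: create and link every element by hand. */
    GstElement* pipeline2 = gst_pipeline_new("manual-pipeline");
    GstElement* src  = gst_element_factory_make("videotestsrc", "src");
    GstElement* cvt  = gst_element_factory_make("videoconvert", "cvt");
    GstElement* sink = gst_element_factory_make("autovideosink", "sink");
    if (!src || !cvt || !sink) {
        g_printerr("failed to create an element\n");
        return -1;
    }
    gst_bin_add_many(GST_BIN(pipeline2), src, cvt, sink, nullptr);
    if (!gst_element_link_many(src, cvt, sink, nullptr)) {
        g_printerr("failed to link elements\n");
        return -1;
    }

    /* Run one of them; both behave identically. */
    gst_element_set_state(pipeline1, GST_STATE_PLAYING);
    /* ... wait on the bus for EOS/ERROR in a real application ... */
    gst_element_set_state(pipeline1, GST_STATE_NULL);
    gst_object_unref(pipeline1);
    gst_object_unref(pipeline2);
    return 0;
}
```

`gst_parse_launch()` is ideal for prototyping, while `gst_element_factory_make()` hands you each element so you can set properties and attach probes; the latter is the style the code in this repository uses.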
### Platform-specific Plugins

This chapter introduces some of the custom plugins on the `Qualcomm` and `Nvidia` platforms. Since I currently do most of my development on the `Qualcomm` platform, and `Nvidia` maintains a fairly mature issue tracker and forums, **the `Nvidia` material is supplementary only and its schedule is undecided**.

- [Qualcomm GStreamer Plugins](https://developer.qualcomm.com/qualcomm-robotics-rb5-kit/software-reference-manual/application-semantics/gstreamer-plugins)
  - qtivdec
  - qtioverlay
- [Nvidia GStreamer Plugin Overview](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_Intro.html)
  - nvdsosd

**Note**: The author's knowledge is limited; if you spot any mistakes, corrections are welcome.

## Contributors

Translation: [@Yinan Fu](https://github.com/fengxueem)

Proofreading: [@thetffs](https://github.com/thetffs)

## Contact

- Read online: https://ricardolu.gitbook.io/gstreamer/
- GitHub: https://github.com/gesanqiu/gstreamer-example
- E-mail: [shenglu1202@163.com](mailto:shenglu1202@163.com)

================================================
FILE: ai_integration/deepstream/CMakeLists.txt
================================================

# created by Ricardo Lu on 07/15/2022

cmake_minimum_required(VERSION 3.10)

project(ds-yolov5s)

set(CMAKE_CXX_STANDARD 17)

find_package(OpenCV REQUIRED)
find_package(spdlog REQUIRED)

include(FindPkgConfig)
pkg_check_modules(GST REQUIRED gstreamer-1.0)
pkg_check_modules(GSTAPP REQUIRED gstreamer-app-1.0)
pkg_check_modules(GLIB REQUIRED glib-2.0)
pkg_check_modules(GFLAGS REQUIRED gflags)
pkg_check_modules(JSONCPP REQUIRED jsoncpp)

set(DeepStream_ROOT "/opt/nvidia/deepstream/deepstream-6.1")
set(DeepStream_INCLUDE_DIRS "${DeepStream_ROOT}/sources/includes")
set(DeepStream_LIBRARY_DIRS "${DeepStream_ROOT}/lib")

message(STATUS "GST: ${GST_INCLUDE_DIRS},${GST_LIBRARY_DIRS},${GST_LIBRARIES}")
message(STATUS "GSTAPP:${GSTAPP_INCLUDE_DIRS},${GSTAPP_LIBRARY_DIRS},${GSTAPP_LIBRARIES}")
message(STATUS "GLIB: ${GLIB_INCLUDE_DIRS},${GLIB_LIBRARY_DIRS},${GLIB_LIBRARIES}")
message(STATUS "JSONCPP: ${JSONCPP_INCLUDE_DIRS},${JSONCPP_LIBRARY_DIRS},${JSONCPP_LIBRARIES}")
message(STATUS "GFLAGS:${GFLAGS_INCLUDE_DIRS},${GFLAGS_LIBRARY_DIRS},${GFLAGS_LIBRARIES}")
message(STATUS "OpenCV:${OpenCV_INCLUDE_DIRS},${OpenCV_LIBRARY_DIRS},${OpenCV_LIBRARIES}")
message(STATUS "DeepStream: ${DeepStream_INCLUDE_DIRS}, ${DeepStream_LIBRARY_DIRS}")

include_directories(
    ${PROJECT_SOURCE_DIR}/inc
    ${GST_INCLUDE_DIRS}
    ${GSTAPP_INCLUDE_DIRS}
    ${GLIB_INCLUDE_DIRS}
    ${GFLAGS_INCLUDE_DIRS}
    ${JSONCPP_INCLUDE_DIRS}
    ${OpenCV_INCLUDE_DIRS}
    ${spdlog_INCLUDE_DIRS}
    ${DeepStream_INCLUDE_DIRS}
)

link_directories(
    ${GST_LIBRARY_DIRS}
    ${GSTAPP_LIBRARY_DIRS}
    ${GLIB_LIBRARY_DIRS}
    ${GFLAGS_LIBRARY_DIRS}
    ${JSONCPP_LIBRARY_DIRS}
    ${OpenCV_LIBRARY_DIRS}
    ${spdlog_LIBRARY_DIRS}
    ${DeepStream_LIBRARY_DIRS}
)

# Config Logger
# e.g. cmake -DLOG_LEVEL=debug -DDUMP_LOG=ON ..
if(NOT DEFINED LOG_LEVEL)
    message(STATUS "Not define log print level, default is 'info'")
    set(LOG_LEVEL "info")
endif()
add_definitions(-DLOG_LEVEL="${LOG_LEVEL}")
message(STATUS "log level: ${LOG_LEVEL}")

option(DUMP_LOG "Dump log into a file." OFF)
option(MULTI_LOG "Dump log and stdout."
OFF) if(DUMP_LOG OR MULTI_LOG) if(NOT DEFINED LOG_PATH) message(STATUS "Not define log path, use default") set(LOG_PATH "./log") message(STATUS "log path: ${LOG_PATH}") endif() if(NOT DEFINED LOG_FILE_PREFIX) message(STATUS "Not define log name prefix, use default") set(LOG_FILE_PREFIX ${PROJECT_NAME}) message(STATUS "log file prefix: ${LOG_FILE_PREFIX}") endif() add_definitions( -DDUMP_LOG -DLOG_PATH="${LOG_PATH}" -DLOG_FILE_PREFIX="${LOG_FILE_PREFIX}" ) if(MULTI_LOG) message(STATUS "Multi log set.") add_definitions(-DMULTI_LOG) endif() endif() # End Config Logger add_executable(${PROJECT_NAME} src/VideoPipeline.cpp src/main.cpp ) target_link_libraries(${PROJECT_NAME} ${GST_LIBRARIES} ${GSTAPP_LIBRARIES} ${GLIB_LIBRARIES} ${GFLAGS_LIBRARIES} ${JSONCPP_LIBRARIES} ${OpenCV_LIBRARIES} nvbufsurface nvdsgst_meta nvds_meta nvds_utils ) ================================================ FILE: ai_integration/deepstream/doc/video-pipeline.dot ================================================ digraph pipeline { rankdir=LR; fontname="sans"; fontsize="10"; labelloc=t; nodesep=.1; ranksep=.2; label="\nvideo-pipeline\n[>]"; node [style="filled,rounded", shape=box, fontsize="9", fontname="sans", margin="0.0,0.0"]; edge [labelfontsize="6", fontsize="9", fontname="monospace"]; legend [ pos="0,0!", margin="0.05,0.05", style="filled", label="Legend\lElement-States: [~] void-pending, [0] null, [-] ready, [=] paused, [>] playing\lPad-Activation: [-] none, [>] push, [<] pull\lPad-Flags: [b]locked, [f]lushing, [b]locking, [E]OS; upper-case is set\lPad-Task: [T] has started task, [t] has paused task\l", ]; subgraph cluster_appsink_0x55d4d81b3c80 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstAppSink\nappsink\n[>]\nparent=(GstPipeline) video-pipeline\nlast-sample=((GstSample*) 0x7fe6c4124260)\neos=FALSE\nemit-signals=TRUE"; subgraph cluster_appsink_0x55d4d81b3c80_sink { label=""; style="invis"; appsink_0x55d4d81b3c80_sink_0x55d4d81b4820 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } fillcolor="#aaaaff"; } subgraph cluster_capfilter1_0x55d4d77c65b0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstCapsFilter\ncapfilter1\n[>]\nparent=(GstPipeline) video-pipeline\ncaps=video/x-raw(memory:NVMM), format=(string)RGBA"; subgraph cluster_capfilter1_0x55d4d77c65b0_sink { label=""; style="invis"; capfilter1_0x55d4d77c65b0_sink_0x55d4d81b4380 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_capfilter1_0x55d4d77c65b0_src { label=""; style="invis"; capfilter1_0x55d4d77c65b0_src_0x55d4d81b45d0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } capfilter1_0x55d4d77c65b0_sink_0x55d4d81b4380 -> capfilter1_0x55d4d77c65b0_src_0x55d4d81b45d0 [style="invis"]; fillcolor="#aaffaa"; } capfilter1_0x55d4d77c65b0_src_0x55d4d81b45d0 -> appsink_0x55d4d81b3c80_sink_0x55d4d81b4820 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_videocvt1_0x55d4d81b1df0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="Gstnvvideoconvert\nvideocvt1\n[>]\nparent=(GstPipeline) 
video-pipeline\nsrc-crop=\"0:0:0:0\"\ndest-crop=\"0:0:0:0\"\nnvbuf-memory-type=nvbuf-mem-cuda-unified"; subgraph cluster_videocvt1_0x55d4d81b1df0_sink { label=""; style="invis"; videocvt1_0x55d4d81b1df0_sink_0x55d4d76e9d00 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_videocvt1_0x55d4d81b1df0_src { label=""; style="invis"; videocvt1_0x55d4d81b1df0_src_0x55d4d81b4130 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } videocvt1_0x55d4d81b1df0_sink_0x55d4d76e9d00 -> videocvt1_0x55d4d81b1df0_src_0x55d4d81b4130 [style="invis"]; fillcolor="#aaffaa"; } videocvt1_0x55d4d81b1df0_src_0x55d4d81b4130 -> capfilter1_0x55d4d77c65b0_sink_0x55d4d81b4380 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_queue1_0x55d4d76ec390 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstQueue\nqueue1\n[>]\nparent=(GstPipeline) video-pipeline\ncurrent-level-buffers=4\ncurrent-level-bytes=256\ncurrent-level-time=133200000"; subgraph cluster_queue1_0x55d4d76ec390_sink { label=""; style="invis"; queue1_0x55d4d76ec390_sink_0x55d4d76e9860 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_queue1_0x55d4d76ec390_src { label=""; style="invis"; queue1_0x55d4d76ec390_src_0x55d4d76e9ab0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb][T]", height="0.2", style="filled,solid"]; } queue1_0x55d4d76ec390_sink_0x55d4d76e9860 -> queue1_0x55d4d76ec390_src_0x55d4d76e9ab0 [style="invis"]; fillcolor="#aaffaa"; } queue1_0x55d4d76ec390_src_0x55d4d76e9ab0 -> videocvt1_0x55d4d81b1df0_sink_0x55d4d76e9d00 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_display_0x55d4d81ac3a0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstEglGlesSink\ndisplay\n[>]\nparent=(GstPipeline) video-pipeline\nmax-lateness=5000000\nqos=TRUE\nlast-sample=((GstSample*) 0x7fe6c4124340)\nprocessing-deadline=15000000\nwindow-x=0\nwindow-y=0\nwindow-width=1920\nwindow-height=1080"; subgraph cluster_display_0x55d4d81ac3a0_sink { label=""; style="invis"; display_0x55d4d81ac3a0_sink_0x55d4d76e9610 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } fillcolor="#aaaaff"; } subgraph cluster_overlay_0x55d4d80f3c20 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstNvDsOsd\noverlay\n[>]\nparent=(GstPipeline) video-pipeline\nclock-font=NULL\nclock-font-size=0\nclock-color=0\nhw-blend-color-attr=\"3,1.000000,1.000000,0.000000,0.300000:\"\ndisplay-mask=FALSE"; subgraph cluster_overlay_0x55d4d80f3c20_sink { label=""; style="invis"; overlay_0x55d4d80f3c20_sink_0x55d4d76e9170 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_overlay_0x55d4d80f3c20_src { label=""; style="invis"; overlay_0x55d4d80f3c20_src_0x55d4d76e93c0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } 
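/* Pipeline graph dumped at runtime via GST_DEBUG_BIN_TO_DOT_FILE() (see
   cb_appsink_new_sample in src/VideoPipeline.cpp). GStreamer only writes
   the file when GST_DEBUG_DUMP_DOT_DIR points at an existing directory;
   render it with: dot -Tpng video-pipeline.dot -o video-pipeline.png */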
overlay_0x55d4d80f3c20_sink_0x55d4d76e9170 -> overlay_0x55d4d80f3c20_src_0x55d4d76e93c0 [style="invis"]; fillcolor="#aaffaa"; } overlay_0x55d4d80f3c20_src_0x55d4d76e93c0 -> display_0x55d4d81ac3a0_sink_0x55d4d76e9610 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_capfilter0_0x55d4d77c6270 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstCapsFilter\ncapfilter0\n[>]\nparent=(GstPipeline) video-pipeline\ncaps=video/x-raw(memory:NVMM), format=(string)RGBA"; subgraph cluster_capfilter0_0x55d4d77c6270_sink { label=""; style="invis"; capfilter0_0x55d4d77c6270_sink_0x55d4d76e8cd0 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_capfilter0_0x55d4d77c6270_src { label=""; style="invis"; capfilter0_0x55d4d77c6270_src_0x55d4d76e8f20 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } capfilter0_0x55d4d77c6270_sink_0x55d4d76e8cd0 -> capfilter0_0x55d4d77c6270_src_0x55d4d76e8f20 [style="invis"]; fillcolor="#aaffaa"; } capfilter0_0x55d4d77c6270_src_0x55d4d76e8f20 -> overlay_0x55d4d80f3c20_sink_0x55d4d76e9170 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_videocvt0_0x55d4d777e980 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="Gstnvvideoconvert\nvideocvt0\n[>]\nparent=(GstPipeline) video-pipeline\nsrc-crop=\"0:0:0:0\"\ndest-crop=\"0:0:0:0\"\nnvbuf-memory-type=nvbuf-mem-cuda-unified"; subgraph cluster_videocvt0_0x55d4d777e980_sink { label=""; style="invis"; videocvt0_0x55d4d777e980_sink_0x55d4d76e8830 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_videocvt0_0x55d4d777e980_src { label=""; style="invis"; videocvt0_0x55d4d777e980_src_0x55d4d76e8a80 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } videocvt0_0x55d4d777e980_sink_0x55d4d76e8830 -> videocvt0_0x55d4d777e980_src_0x55d4d76e8a80 [style="invis"]; fillcolor="#aaffaa"; } videocvt0_0x55d4d777e980_src_0x55d4d76e8a80 -> capfilter0_0x55d4d77c6270_sink_0x55d4d76e8cd0 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_queue0_0x55d4d76ec090 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstQueue\nqueue0\n[>]\nparent=(GstPipeline) video-pipeline\ncurrent-level-buffers=4\ncurrent-level-bytes=256\ncurrent-level-time=133200000"; subgraph cluster_queue0_0x55d4d76ec090_sink { label=""; style="invis"; queue0_0x55d4d76ec090_sink_0x55d4d76e8390 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_queue0_0x55d4d76ec090_src { label=""; style="invis"; queue0_0x55d4d76ec090_src_0x55d4d76e85e0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb][T]", height="0.2", style="filled,solid"]; } 
queue0_0x55d4d76ec090_sink_0x55d4d76e8390 -> queue0_0x55d4d76ec090_src_0x55d4d76e85e0 [style="invis"]; fillcolor="#aaffaa"; } queue0_0x55d4d76ec090_src_0x55d4d76e85e0 -> videocvt0_0x55d4d777e980_sink_0x55d4d76e8830 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_tee0_0x55d4d76e6000 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstTee\ntee0\n[>]\nparent=(GstPipeline) video-pipeline\nnum-src-pads=2"; subgraph cluster_tee0_0x55d4d76e6000_sink { label=""; style="invis"; tee0_0x55d4d76e6000_sink_0x55d4d76e8140 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_tee0_0x55d4d76e6000_src { label=""; style="invis"; tee0_0x55d4d76e6000_src_0_0x55d4d76e02e0 [color=black, fillcolor="#ffaaaa", label="src_0\n[>][bfb]", height="0.2", style="filled,dashed"]; tee0_0x55d4d76e6000_src_1_0x55d4d76e0540 [color=black, fillcolor="#ffaaaa", label="src_1\n[>][bfb]", height="0.2", style="filled,dashed"]; } tee0_0x55d4d76e6000_sink_0x55d4d76e8140 -> tee0_0x55d4d76e6000_src_0_0x55d4d76e02e0 [style="invis"]; fillcolor="#aaffaa"; } tee0_0x55d4d76e6000_src_0_0x55d4d76e02e0 -> queue0_0x55d4d76ec090_sink_0x55d4d76e8390 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] tee0_0x55d4d76e6000_src_1_0x55d4d76e0540 -> queue1_0x55d4d76ec390_sink_0x55d4d76e9860 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_uri_0x55d4d76e0060 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstURIDecodeBin\nuri\n[>]\nparent=(GstPipeline) video-pipeline\nuri=\"file:///home/ricardo/workSpace/gstreamer-example/ai_integration/test.mp4\"\nsource=(GstFileSrc) source\ncaps=video/x-raw(ANY); audio/x-raw(ANY); text/x-raw(ANY); subpicture/x-dvd; subpictur…"; subgraph cluster_uri_0x55d4d76e0060_src { label=""; style="invis"; _proxypad4_0x55d4d76e1d10 [color=black, fillcolor="#ffdddd", label="proxypad4\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad4_0x55d4d76e1d10 -> uri_0x55d4d76e0060_src_0_0x55d4d8fdcaf0 [style=dashed, minlen=0] uri_0x55d4d76e0060_src_0_0x55d4d8fdcaf0 [color=black, fillcolor="#ffdddd", label="src_0\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad5_0x7fe6c832e130 [color=black, fillcolor="#ffdddd", label="proxypad5\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad5_0x7fe6c832e130 -> uri_0x55d4d76e0060_src_1_0x55d4d8fdcd70 [style=dashed, minlen=0] uri_0x55d4d76e0060_src_1_0x55d4d8fdcd70 [color=black, fillcolor="#ffdddd", label="src_1\n[>][bfb]", height="0.2", style="filled,dotted"]; } fillcolor="#ffffff"; subgraph cluster_decodebin0_0x55d4d8fda090 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstDecodeBin\ndecodebin0\n[>]\nparent=(GstURIDecodeBin) uri\ncaps=video/x-raw(ANY); audio/x-raw(ANY); text/x-raw(ANY); subpicture/x-dvd; 
subpictur…"; subgraph cluster_decodebin0_0x55d4d8fda090_sink { label=""; style="invis"; _proxypad0_0x55d4d76e07b0 [color=black, fillcolor="#ddddff", label="proxypad0\n[<][bfb]", height="0.2", style="filled,solid"]; decodebin0_0x55d4d8fda090_sink_0x55d4d8fdc0f0 -> _proxypad0_0x55d4d76e07b0 [style=dashed, minlen=0] decodebin0_0x55d4d8fda090_sink_0x55d4d8fdc0f0 [color=black, fillcolor="#ddddff", label="sink\n[<][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_decodebin0_0x55d4d8fda090_src { label=""; style="invis"; _proxypad2_0x55d4d76e0a10 [color=black, fillcolor="#ffdddd", label="proxypad2\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad2_0x55d4d76e0a10 -> decodebin0_0x55d4d8fda090_src_0_0x7fe6d00320a0 [style=dashed, minlen=0] decodebin0_0x55d4d8fda090_src_0_0x7fe6d00320a0 [color=black, fillcolor="#ffdddd", label="src_0\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad3_0x55d4d76e1390 [color=black, fillcolor="#ffdddd", label="proxypad3\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad3_0x55d4d76e1390 -> decodebin0_0x55d4d8fda090_src_1_0x7fe6d0032b20 [style=dashed, minlen=0] decodebin0_0x55d4d8fda090_src_1_0x7fe6d0032b20 [color=black, fillcolor="#ffdddd", label="src_1\n[>][bfb]", height="0.2", style="filled,dotted"]; } decodebin0_0x55d4d8fda090_sink_0x55d4d8fdc0f0 -> decodebin0_0x55d4d8fda090_src_0_0x7fe6d00320a0 [style="invis"]; fillcolor="#ffffff"; subgraph cluster_nvv4l2decoder0_0x7fe6c8018ee0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="nvv4l2decoder\nnvv4l2decoder0\n[>]\nparent=(GstDecodeBin) decodebin0\ndevice=\"/dev/nvidia0\"\ndevice-name=\"\"\ndevice-fd=31\ndrop-frame-interval=0\nnum-extra-surfaces=0"; subgraph cluster_nvv4l2decoder0_0x7fe6c8018ee0_sink { label=""; style="invis"; nvv4l2decoder0_0x7fe6c8018ee0_sink_0x7fe6c4132410 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_nvv4l2decoder0_0x7fe6c8018ee0_src { label=""; style="invis"; nvv4l2decoder0_0x7fe6c8018ee0_src_0x7fe6c4132660 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb][T]", height="0.2", style="filled,solid"]; } nvv4l2decoder0_0x7fe6c8018ee0_sink_0x7fe6c4132410 -> nvv4l2decoder0_0x7fe6c8018ee0_src_0x7fe6c4132660 [style="invis"]; fillcolor="#aaffaa"; } nvv4l2decoder0_0x7fe6c8018ee0_src_0x7fe6c4132660 -> _proxypad2_0x55d4d76e0a10 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_avdec_aac0_0x7fe6c41314d0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="avdec_aac\navdec_aac0\n[>]\nparent=(GstDecodeBin) decodebin0"; subgraph cluster_avdec_aac0_0x7fe6c41314d0_sink { label=""; style="invis"; avdec_aac0_0x7fe6c41314d0_sink_0x7fe6c400b8f0 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_avdec_aac0_0x7fe6c41314d0_src { label=""; style="invis"; avdec_aac0_0x7fe6c41314d0_src_0x7fe6c400bb40 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } avdec_aac0_0x7fe6c41314d0_sink_0x7fe6c400b8f0 -> avdec_aac0_0x7fe6c41314d0_src_0x7fe6c400bb40 [style="invis"]; fillcolor="#aaffaa"; } avdec_aac0_0x7fe6c41314d0_src_0x7fe6c400bb40 -> _proxypad3_0x55d4d76e1390 [label="audio/x-raw\l format: F32LE\l 
layout: non-interleaved\l rate: 48000\l channels: 2\l channel-mask: 0x0000000000000003\l"] subgraph cluster_aacparse0_0x7fe6c40900f0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstAacParse\naacparse0\n[>]\nparent=(GstDecodeBin) decodebin0"; subgraph cluster_aacparse0_0x7fe6c40900f0_sink { label=""; style="invis"; aacparse0_0x7fe6c40900f0_sink_0x7fe6c400b450 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_aacparse0_0x7fe6c40900f0_src { label=""; style="invis"; aacparse0_0x7fe6c40900f0_src_0x7fe6c400b6a0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } aacparse0_0x7fe6c40900f0_sink_0x7fe6c400b450 -> aacparse0_0x7fe6c40900f0_src_0x7fe6c400b6a0 [style="invis"]; fillcolor="#aaffaa"; } aacparse0_0x7fe6c40900f0_src_0x7fe6c400b6a0 -> avdec_aac0_0x7fe6c41314d0_sink_0x7fe6c400b8f0 [label="audio/mpeg\l mpegversion: 4\l framed: true\l stream-format: raw\l level: 2\l base-profile: lc\l profile: lc\l codec_data: 1190\l rate: 48000\l channels: 2\l"] subgraph cluster_capsfilter0_0x55d4d77c6f70 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstCapsFilter\ncapsfilter0\n[>]\nparent=(GstDecodeBin) decodebin0\ncaps=video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, parsed=(b…"; subgraph cluster_capsfilter0_0x55d4d77c6f70_sink { label=""; style="invis"; capsfilter0_0x55d4d77c6f70_sink_0x7fe6c400a8c0 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_capsfilter0_0x55d4d77c6f70_src { label=""; style="invis"; capsfilter0_0x55d4d77c6f70_src_0x7fe6c400ab10 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } capsfilter0_0x55d4d77c6f70_sink_0x7fe6c400a8c0 -> capsfilter0_0x55d4d77c6f70_src_0x7fe6c400ab10 [style="invis"]; fillcolor="#aaffaa"; } capsfilter0_0x55d4d77c6f70_src_0x7fe6c400ab10 -> nvv4l2decoder0_0x7fe6c8018ee0_sink_0x7fe6c4132410 [label="video/x-h264\l stream-format: byte-stream\l alignment: au\l level: 4.2\l profile: high\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l interlace-mode: progressive\l chroma-format: 4:2:0\l bit-depth-luma: 8\l bit-depth-chroma: 8\l parsed: true\l"] subgraph cluster_h264parse0_0x7fe6c40108a0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstH264Parse\nh264parse0\n[>]\nparent=(GstDecodeBin) decodebin0\nconfig-interval=-1"; subgraph cluster_h264parse0_0x7fe6c40108a0_sink { label=""; style="invis"; h264parse0_0x7fe6c40108a0_sink_0x7fe6c400a420 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_h264parse0_0x7fe6c40108a0_src { label=""; style="invis"; h264parse0_0x7fe6c40108a0_src_0x7fe6c400a670 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } h264parse0_0x7fe6c40108a0_sink_0x7fe6c400a420 -> h264parse0_0x7fe6c40108a0_src_0x7fe6c400a670 [style="invis"]; fillcolor="#aaffaa"; } h264parse0_0x7fe6c40108a0_src_0x7fe6c400a670 -> capsfilter0_0x55d4d77c6f70_sink_0x7fe6c400a8c0 [label="video/x-h264\l stream-format: byte-stream\l alignment: au\l level: 4.2\l profile: high\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l interlace-mode: progressive\l chroma-format: 4:2:0\l bit-depth-luma: 8\l bit-depth-chroma: 8\l parsed: true\l"] subgraph 
cluster_multiqueue0_0x7fe6c400d060 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstMultiQueue\nmultiqueue0\n[>]\nparent=(GstDecodeBin) decodebin0\nmax-size-bytes=2097152\nmax-size-time=0"; subgraph cluster_multiqueue0_0x7fe6c400d060_sink { label=""; style="invis"; multiqueue0_0x7fe6c400d060_sink_0_0x55d4d81b5cf0 [color=black, fillcolor="#aaaaff", label="sink_0\n[>][bfb]", height="0.2", style="filled,dashed"]; multiqueue0_0x7fe6c400d060_sink_1_0x7fe6c400afb0 [color=black, fillcolor="#aaaaff", label="sink_1\n[>][bfb]", height="0.2", style="filled,dashed"]; } subgraph cluster_multiqueue0_0x7fe6c400d060_src { label=""; style="invis"; multiqueue0_0x7fe6c400d060_src_0_0x7fe6c400a1d0 [color=black, fillcolor="#ffaaaa", label="src_0\n[>][bfb][T]", height="0.2", style="filled,dotted"]; multiqueue0_0x7fe6c400d060_src_1_0x7fe6c400b200 [color=black, fillcolor="#ffaaaa", label="src_1\n[>][bfb][T]", height="0.2", style="filled,dotted"]; } multiqueue0_0x7fe6c400d060_sink_0_0x55d4d81b5cf0 -> multiqueue0_0x7fe6c400d060_src_0_0x7fe6c400a1d0 [style="invis"]; fillcolor="#aaffaa"; } multiqueue0_0x7fe6c400d060_src_0_0x7fe6c400a1d0 -> h264parse0_0x7fe6c40108a0_sink_0x7fe6c400a420 [label="video/x-h264\l stream-format: avc\l alignment: au\l level: 4.2\l profile: high\l codec_data: 0164002affe10018676400...\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l"] multiqueue0_0x7fe6c400d060_src_1_0x7fe6c400b200 -> aacparse0_0x7fe6c40900f0_sink_0x7fe6c400b450 [label="audio/mpeg\l mpegversion: 4\l framed: true\l stream-format: raw\l level: 2\l base-profile: lc\l profile: lc\l codec_data: 1190\l rate: 48000\l channels: 2\l"] subgraph cluster_qtdemux0_0x7fe6d007e140 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstQTDemux\nqtdemux0\n[>]\nparent=(GstDecodeBin) decodebin0"; subgraph cluster_qtdemux0_0x7fe6d007e140_sink { label=""; style="invis"; qtdemux0_0x7fe6d007e140_sink_0x55d4d81b5160 [color=black, fillcolor="#aaaaff", label="sink\n[<][bfb][T]", height="0.2", style="filled,solid"]; } subgraph cluster_qtdemux0_0x7fe6d007e140_src { label=""; style="invis"; qtdemux0_0x7fe6d007e140_video_0_0x55d4d81b5aa0 [color=black, fillcolor="#ffaaaa", label="video_0\n[>][bfb]", height="0.2", style="filled,dotted"]; qtdemux0_0x7fe6d007e140_audio_0_0x7fe6c400ad60 [color=black, fillcolor="#ffaaaa", label="audio_0\n[>][bfb]", height="0.2", style="filled,dotted"]; } qtdemux0_0x7fe6d007e140_sink_0x55d4d81b5160 -> qtdemux0_0x7fe6d007e140_video_0_0x55d4d81b5aa0 [style="invis"]; fillcolor="#aaffaa"; } qtdemux0_0x7fe6d007e140_video_0_0x55d4d81b5aa0 -> multiqueue0_0x7fe6c400d060_sink_0_0x55d4d81b5cf0 [label="video/x-h264\l stream-format: avc\l alignment: au\l level: 4.2\l profile: high\l codec_data: 0164002affe10018676400...\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l"] qtdemux0_0x7fe6d007e140_audio_0_0x7fe6c400ad60 -> multiqueue0_0x7fe6c400d060_sink_1_0x7fe6c400afb0 [label="audio/mpeg\l mpegversion: 4\l framed: true\l stream-format: raw\l level: 2\l base-profile: lc\l profile: lc\l codec_data: 1190\l rate: 48000\l channels: 2\l"] subgraph cluster_typefind_0x55d4d90810b0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstTypeFindElement\ntypefind\n[>]\nparent=(GstDecodeBin) decodebin0\ncaps=video/quicktime, variant=(string)iso"; subgraph cluster_typefind_0x55d4d90810b0_sink { label=""; style="invis"; typefind_0x55d4d90810b0_sink_0x55d4d81b4cc0 [color=black, fillcolor="#aaaaff", 
label="sink\n[<][bfb][t]", height="0.2", style="filled,solid"]; } subgraph cluster_typefind_0x55d4d90810b0_src { label=""; style="invis"; typefind_0x55d4d90810b0_src_0x55d4d81b4f10 [color=black, fillcolor="#ffaaaa", label="src\n[<][bfb]", height="0.2", style="filled,solid"]; } typefind_0x55d4d90810b0_sink_0x55d4d81b4cc0 -> typefind_0x55d4d90810b0_src_0x55d4d81b4f10 [style="invis"]; fillcolor="#aaffaa"; } _proxypad0_0x55d4d76e07b0 -> typefind_0x55d4d90810b0_sink_0x55d4d81b4cc0 [label="ANY"] typefind_0x55d4d90810b0_src_0x55d4d81b4f10 -> qtdemux0_0x7fe6d007e140_sink_0x55d4d81b5160 [labeldistance="10", labelangle="0", label=" ", taillabel="ANY", headlabel="video/quicktime\lvideo/mj2\laudio/x-m4a\lapplication/x-3gp\l"] } decodebin0_0x55d4d8fda090_src_0_0x7fe6d00320a0 -> _proxypad4_0x55d4d76e1d10 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] decodebin0_0x55d4d8fda090_src_1_0x7fe6d0032b20 -> _proxypad5_0x7fe6c832e130 [label="audio/x-raw\l format: F32LE\l layout: non-interleaved\l rate: 48000\l channels: 2\l channel-mask: 0x0000000000000003\l"] subgraph cluster_source_0x55d4d86dc3e0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstFileSrc\nsource\n[>]\nparent=(GstURIDecodeBin) uri\nlocation=\"/home/ricardo/workSpace/gstreamer-example/ai_integration/test.mp4\""; subgraph cluster_source_0x55d4d86dc3e0_src { label=""; style="invis"; source_0x55d4d86dc3e0_src_0x55d4d81b4a70 [color=black, fillcolor="#ffaaaa", label="src\n[<][bfb]", height="0.2", style="filled,solid"]; } fillcolor="#ffaaaa"; } source_0x55d4d86dc3e0_src_0x55d4d81b4a70 -> decodebin0_0x55d4d8fda090_sink_0x55d4d8fdc0f0 [label="ANY"] } uri_0x55d4d76e0060_src_0_0x55d4d8fdcaf0 -> tee0_0x55d4d76e6000_sink_0x55d4d76e8140 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] } ================================================ FILE: ai_integration/deepstream/inc/Common.h ================================================ /* * @Description: Common Utils. 
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 12:24:25
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2022-07-16 13:49:59
 */

#pragma once

// NOTE: include list reconstructed from usage below; the original targets
// were lost in extraction.
#include <cstdint>
#include <functional>
#include <memory>
#include <string>
#include <vector>

#include <gst/gst.h>

#include "Logger.h"

class OSDObject {
public:
    int x, y, width, height;
    double r, g, b, a;

    OSDObject(int _x, int _y, int _width, int _height,
              double _r, double _g, double _b, double _a = 1.0) :
        x(_x), y(_y), width(_width), height(_height),
        r(_r), g(_g), b(_b), a(_a)
    {
    }
};

class GstBufferObject {
public:
    GstBufferObject(GstBuffer* buffer) : m_buffer(nullptr)
    {
        if (buffer) {
            gst_buffer_ref(buffer);
            m_buffer = buffer;
        }
    }

    ~GstBufferObject()
    {
        if (m_buffer) {
            gst_buffer_unref(m_buffer);
            m_buffer = nullptr;
        }
    }

    GstBuffer* GetBuffer() { return m_buffer; }

    GstBuffer* RefBuffer()
    {
        if (m_buffer) {
            gst_buffer_ref(m_buffer);
        }
        return m_buffer;
    }

private:
    GstBuffer* m_buffer;
};

class GstSampleObject {
public:
    GstSampleObject(GstSample* sample, uint64_t timestamp) :
        m_sample(sample), m_timestamp(timestamp), m_buffer(nullptr),
        m_format(nullptr), m_rows(0), m_cols(0), m_fpsn(0), m_fpsd(0)
    {
    }

    ~GstSampleObject()
    {
        if (m_sample) {
            gst_sample_unref(m_sample);
            m_sample = nullptr;
        }
    }

    GstSample* GetSample() { return m_sample; }

    GstSample* RefSample() { return gst_sample_ref(m_sample); }

    GstBuffer* GetBuffer(int& width, int& height, std::string& format)
    {
        if (!m_buffer) {
            GstCaps* caps = gst_sample_get_caps(m_sample);
            GstStructure* structure = gst_caps_get_structure(caps, 0);
            gst_structure_get_int(structure, "width", &m_cols);
            gst_structure_get_int(structure, "height", &m_rows);
            gst_structure_get_fraction(structure, "framerate", &m_fpsn, &m_fpsd);
            m_format = gst_structure_get_string(structure, "format");
            m_buffer = gst_sample_get_buffer(m_sample);
        }
        width = m_cols;
        height = m_rows;
        format = m_format;
        return m_buffer;
    }

    GstBuffer* RefBuffer(int& width, int& height, std::string& format)
    {
        return gst_buffer_ref(GetBuffer(width, height, format));
    }

    gint64 GetTimestamp() { return m_timestamp; }

private:
    GstSample* m_sample;
    GstBuffer* m_buffer;
    int m_cols;
    int m_rows;
    int m_fpsn;
    int m_fpsd;
    const char* m_format;
    uint64_t m_timestamp;
};

// callback functions
// NOTE: template arguments reconstructed from how these callbacks are
// invoked in VideoPipeline.cpp.
typedef std::function<void(GstSample*, void*)> PutFrameFunc;
typedef std::function<std::shared_ptr<GstSampleObject>(void*)> GetFrameFunc;
typedef std::function<void(std::shared_ptr<std::vector<OSDObject>>, void*)> PutResultFunc;
typedef std::function<std::shared_ptr<std::vector<OSDObject>>(void*)> GetResultFunc;
typedef std::function<void(GstBuffer*, const std::shared_ptr<std::vector<OSDObject>>&)> ProcResultFunc;

================================================
FILE: ai_integration/deepstream/inc/DoubleBufferCache.h
================================================

/*
 * @Description: Double Buffer Cache implementation.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-29 08:51:01
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2022-07-16 13:52:32
 */

#pragma once

// NOTE: include list reconstructed from usage below.
#include <atomic>
#include <functional>
#include <memory>
#include <mutex>
#include <string>

/**
 * @brief Shared-buffer cache manager.
 */
template <typename T>
class DoubleBufCache {
public:
    /**
     * @brief: constructor
     * @Author: Ricardo Lu
     * @param[in] notify_func When a new buffer is fed, it triggers the function handle.
     * @return {*}
     */
    DoubleBufCache(std::function<void()> notify_func = std::function<void()>{nullptr},
                   std::string debug_info = "") noexcept :
        debug_info(debug_info), swap_ready(false)
    {
        this->notify_func = notify_func;
    }

    /**
     * @brief destructor
     * @Author: Ricardo Lu
     */
    ~DoubleBufCache() noexcept
    {
        if (!debug_info.empty()) {
            printf("DoubleBufCache %s destroyed.\n", debug_info.c_str());
        }
    }

    /**
     * @brief Put the latest buffer into the cache to be processed,
     *        giving up control of the previous front buffer.
     * @Author: Ricardo Lu
     * @param[in] pending - The latest buffer.
     */
    void feed(std::shared_ptr<T> pending)
    {
        if (nullptr == pending.get()) {
            throw "ERROR: feed an empty buffer to DoubleBufCache";
        }
        swap_mtx.lock();
        front_sp = pending;
        swap_mtx.unlock();
        swap_ready = true;
        if (notify_func) {
            notify_func();
        }
        return;
    }

    /**
     * @brief Get the front buffer.
     * @Author: Ricardo Lu
     * @return Front buffer.
     */
    std::shared_ptr<T> front() noexcept
    {
        return front_sp;
    }

    /**
     * @brief Fetch the shared back buffer.
     * @Author: Ricardo Lu
     * @return Back buffer.
     */
    std::shared_ptr<T> fetch() noexcept
    {
        if (swap_ready) {
            swap_mtx.lock();
            back_sp = front_sp;
            swap_mtx.unlock();
            swap_ready = false;
        }
        return back_sp;
    }

private:
    //! Notification function called whenever a new buffer is fed.
    std::function<void()> notify_func;
    //! The buffer cache can be swapped when this flag is true.
    std::atomic<bool> swap_ready;
    //! Swapping mutex lock for thread safety.
    std::mutex swap_mtx;
    //! Front buffer holding the most recently fed data.
    std::shared_ptr<T> front_sp;
    //! Back buffer to be fetched.
    std::shared_ptr<T> back_sp;

public:
    //! Name of the instantiated object, for debugging.
    std::string debug_info;
};
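A minimal sketch of how DoubleBufCache is meant to be consumed (illustration only, not a repository file): a slow consumer fetches whatever is newest while the producer keeps feeding, so the producer is never blocked and the consumer always sees the latest frame.

#include <cstdio>
#include <memory>
#include <string>

#include "DoubleBufferCache.h"

int main() {
    // Cache of strings standing in for decoded frames.
    DoubleBufCache<std::string> cache(
        []() { /* e.g. wake up the inference thread */ },
        "demo-cache");

    // Producer side (e.g. an appsink callback): publish the newest frame.
    cache.feed(std::make_shared<std::string>("frame-1"));
    cache.feed(std::make_shared<std::string>("frame-2"));  // frame-1 is dropped

    // Consumer side: fetch() returns the latest fed buffer and keeps
    // returning it until a newer one arrives.
    std::shared_ptr<std::string> latest = cache.fetch();
    printf("%s\n", latest->c_str());  // prints "frame-2"
    return 0;
}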
================================================
FILE: ai_integration/deepstream/inc/Logger.h
================================================

/*
 * @Description: Single instance logger based on spdlog.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-09-11 20:05:38
 * @Last Editor: Ricardo Lu
 * @LastEditTime: 2022-07-09 08:22:06
 */

#pragma once

// NOTE: include list reconstructed from usage below.
#include <ctime>
#include <memory>
#include <sstream>
#include <string>
#include <sys/stat.h>
#include <unistd.h>

#include "spdlog/spdlog.h"
#include "spdlog/async.h"
#include "spdlog/sinks/basic_file_sink.h"
#include "spdlog/sinks/rotating_file_sink.h"
#include "spdlog/sinks/stdout_color_sinks.h"

static inline int NowDateToInt()
{
    time_t now;
    time(&now);
    tm p;
    localtime_r(&now, &p);
    int now_date = (1900 + p.tm_year) * 10000 + (p.tm_mon + 1) * 100 + p.tm_mday;
    return now_date;
}

static inline int NowTimeToInt()
{
    time_t now;
    time(&now);
    tm p;
    localtime_r(&now, &p);
    int now_int = p.tm_hour * 10000 + p.tm_min * 100 + p.tm_sec;
    return now_int;
}

static inline spdlog::level::level_enum GetLogLevel(std::string& level)
{
    if (!(level.compare("trace"))) {
        return spdlog::level::trace;
    } else if (!(level.compare("debug"))) {
        return spdlog::level::debug;
    } else if (!(level.compare("info"))) {
        return spdlog::level::info;
    } else if (!(level.compare("warn"))) {
        return spdlog::level::warn;
    } else if (!(level.compare("error"))) {
        return spdlog::level::err;
    }
    return spdlog::level::trace;
}

class XLogger {
public:
    static XLogger* getInstance()
    {
        static XLogger xlogger;
        return &xlogger;
    }

    std::shared_ptr<spdlog::logger> getLogger() { return m_logger; }

private:
    XLogger()
    {
        try {
#ifdef DUMP_LOG
            int date = NowDateToInt();
            int timestamp = NowTimeToInt();
            std::stringstream file_logger_name;
            std::stringstream file_log_full_path;

            if (access(LOG_PATH, F_OK)) {
                spdlog::warn("Log directory does not exist, mkdir called");
                mkdir(LOG_PATH, S_IRWXU);
            }

            file_logger_name << LOG_FILE_PREFIX << "_" << date << "_" << timestamp;
            file_log_full_path << LOG_PATH << "/" << file_logger_name.str() << ".log";
#ifdef MULTI_LOG
            auto console_sink = std::make_shared<spdlog::sinks::stdout_color_sink_mt>();
            auto file_sink = std::make_shared<spdlog::sinks::basic_file_sink_mt>(
                file_log_full_path.str(), true);
            spdlog::logger logger("multi_sink", {console_sink, file_sink});
            m_logger = std::make_shared<spdlog::logger>(logger);
#else   // fileout only
            m_logger = spdlog::basic_logger_mt("file_logger", file_log_full_path.str());
#endif  // MULTI_LOG
#else   // stdout only
            m_logger = spdlog::stdout_color_mt("console_logger");
#endif  // DUMP_LOG
            m_logger->set_pattern("%Y-%m-%d %H:%M:%S.%f [%^%l%$] [%@] [%!] %v");
            std::string log_level(LOG_LEVEL);
            spdlog::info("Set log level to {}.", log_level);
            m_logger->set_level(GetLogLevel(log_level));
            m_logger->flush_on(GetLogLevel(log_level));
        } catch (const spdlog::spdlog_ex& ex) {
            spdlog::error("XLogger initialization failed: {}", ex.what());
        }
    }

    ~XLogger()
    {
        spdlog::drop_all();  // must do this
    }

    XLogger(const XLogger&) = delete;
    XLogger& operator=(const XLogger&) = delete;

private:
    std::shared_ptr<spdlog::logger> m_logger;
};

// use embedded macros to record file and line number
#define LOG_TRACE(...) SPDLOG_LOGGER_CALL(XLogger::getInstance()->getLogger().get(), spdlog::level::trace, __VA_ARGS__)
#define LOG_DEBUG(...) SPDLOG_LOGGER_CALL(XLogger::getInstance()->getLogger().get(), spdlog::level::debug, __VA_ARGS__)
#define LOG_INFO(...)  SPDLOG_LOGGER_CALL(XLogger::getInstance()->getLogger().get(), spdlog::level::info,  __VA_ARGS__)
#define LOG_WARN(...)  SPDLOG_LOGGER_CALL(XLogger::getInstance()->getLogger().get(), spdlog::level::warn,  __VA_ARGS__)
#define LOG_ERROR(...) SPDLOG_LOGGER_CALL(XLogger::getInstance()->getLogger().get(), spdlog::level::err,   __VA_ARGS__)
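Usage is just the LOG_* macros with spdlog's {} formatting; the sink and level are chosen at build time by the CMake options shown earlier. A minimal sketch (illustration only):

#include "Logger.h"

int main() {
    LOG_INFO("pipeline {} created", 0);
    LOG_WARN("dropping frame, queue is full");
    LOG_ERROR("failed to link {} -> {}", "tee0", "queue00");
    return 0;
}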
================================================
FILE: ai_integration/deepstream/inc/VideoPipeline.h
================================================

/*
 * @Description: Implementation of VideoPipeline on DeepStream.
 * @version: 2.0
 * @Author: Ricardo Lu
 * @Date: 2022-07-15 22:07:29
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2023-02-06 21:03:47
 */

#pragma once

#include "Common.h"

static int pipeline_id = 0;

typedef enum _VideoType {
    FILE_STREAM = 0,
    RTSP_STREAM = 1,
    USB_CAMERE  = 2
} VideoType;

typedef struct _VideoPipelineConfig {
    std::string pipeline_id;
    int input_type { VideoType::FILE_STREAM };
    /*------------------uridecodebin------------------*/
    std::string src_uri;
    bool file_loop;
    int rtsp_latency;
    int rtp_protocol;
    /*--------------------v4l2src--------------------*/
    std::string src_device;
    std::string src_format;
    int src_width;
    int src_height;
    int src_framerate_n;
    int src_framerate_d;
    /*-------------nveglglessink branch-------------*/
    bool enable_hdmi;
    bool hdmi_sync;
    int window_x;
    int window_y;
    int window_width;
    int window_height;
    /*----------------rtmpsink branch---------------*/
    // the nvvideoconvert on this branch only converts the color space to NV12 (default behavior)
    bool enable_rtmp;
    int enc_bitrate;
    int enc_iframe_interval;
    std::string rtmp_uri;
    /*---------------inference branch---------------*/
    bool enable_appsink;
    /*----------------nvvideoconvert----------------*/
    int cvt_memory_type;
    std::string cvt_format;
    int cvt_width;
    int cvt_height;
    std::string crop;
} VideoPipelineConfig;

class VideoPipeline {
public:
    VideoPipeline(const VideoPipelineConfig& config);
    ~VideoPipeline();
    bool Create();
    bool Start();
    bool Pause();
    bool Resume();
    void Destroy();
    void SetCallbacks(PutFrameFunc func, void* args);
    void SetCallbacks(GetResultFunc func, void* args);
    void SetCallbacks(ProcResultFunc func);

private:
    GstElement* CreateUridecodebin();
    GstElement* CreateV4l2src();

public:
    PutFrameFunc m_putFrameFunc;
    void* m_putFrameArgs;
    GetResultFunc m_getResultFunc;
    void* m_getResultArgs;
    ProcResultFunc m_procResultFunc;

    uint64_t m_queue00_src_probe;      /* probe for nvvideoconvert sync and osd process */
    uint64_t m_cvt_sink_probe;         /* probe for inference rate control */
    uint64_t m_cvt_src_probe;          /* probe for convert lock sync */
    uint64_t m_dec_sink_probe;         /* probe for seek */
    uint64_t m_prev_accumulated_base;  /* PTS offset for seek */
    uint64_t m_accumulated_base;       /* PTS offset for seek */

    VideoPipelineConfig m_config;
    volatile int m_syncCount;
    volatile bool m_isExited;
    GMutex m_syncMuxtex;
    GCond m_syncCondition;
    GMutex m_mutex;
    bool m_dumped;                  /* dump pipeline to dot */

    GstElement* m_pipeline;
    GstElement* m_source;           /* uridecodebin or v4l2src */
    GstElement* m_streammuxer;      /* nvstreammux */
    GstElement* m_capfilter0;       /* image/jpeg */
    GstElement* m_decoder;          /* nvv4l2decoder or nvjpegdec */
    GstElement* m_tee0;             /* display branch & inference branch */
    GstElement* m_queue00;          /* for display branch */
    GstElement* m_fakesink;         /* sync stream when display is disabled */
    GstElement* m_tee1;             /* nveglglessink branch & rtmpsink branch */
    GstElement* m_queue10;          /* for nveglglessink branch */
    GstElement* m_nveglglessink;    /* nveglglessink */
    GstElement* m_queue11;          /* for rtmpsink branch */
    GstElement* m_nvvideoconvert0;  /* convert RGBA(nvjpegdec) to NV12 */
    GstElement* m_capfilter1;
    GstElement* m_encoder;          /* nvv4l2h264enc */
    GstElement* m_h264parse;        /* h264parse */
    GstElement* m_flvmux;           /* flvmux */
    GstElement* m_rtmpsink;         /* rtmpsink */
    GstElement* m_queue01;          /* for inference branch */
    GstElement* m_nvvideoconvert1;  /* convert NV12(nvv4l2decoder) to RGBA */
    GstElement* m_capfilter2;
    GstElement* m_appsink;          /* for AI inference */
};

/*
gst-launch-1.0 uridecodebin uri="rtsp://127.0.0.1:554/live/test" ! tee name=tee0 ! queue ! \
    tee name=tee1 ! queue ! nveglglessink tee1. ! queue ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! \
    nvv4l2h264enc bitrate=4000000 iframeinterval=30 ! flvmux ! rtmpsink location=rtmp://127.0.0.1:1935/live/test \
    tee0. ! queue ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! appsink

gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,format=MJPG,width=1920,height=1080,framerate=30/1 ! \
    nvjpegdec ! tee name=tee0 ! queue ! tee name=tee1 ! queue ! nveglglessink tee1. ! queue ! nvvideoconvert ! \
    video/x-raw(memory:NVMM),format=NV12 ! nvv4l2h264enc bitrate=4000000 iframeinterval=30 ! flvmux ! \
    rtmpsink location=rtmp://127.0.0.1:1935/live/test tee0. ! queue ! nvvideoconvert ! \
    video/x-raw(memory:NVMM),format=RGBA ! appsink
*/
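A sketch of how this class is driven from application code (hypothetical caller; the repository's real entry point is src/main.cpp, which is not shown in this section):

#include "VideoPipeline.h"

// Hypothetical consumer that hands decoded RGBA samples to an inference engine.
static void OnFrame(GstSample* sample, void* user_data) {
    // ... map the buffer, run inference, then release the sample ...
    gst_sample_unref(sample);
}

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);

    VideoPipelineConfig config;
    config.input_type = VideoType::FILE_STREAM;
    config.src_uri = "file:///path/to/test.mp4";  // placeholder URI
    config.file_loop = false;
    config.enable_hdmi = true;
    config.hdmi_sync = true;
    config.window_x = 0;
    config.window_y = 0;
    config.window_width = 1920;
    config.window_height = 1080;
    config.enable_rtmp = false;
    config.enable_appsink = true;

    VideoPipeline pipeline(config);
    pipeline.SetCallbacks(OnFrame, nullptr);  // PutFrameFunc overload
    if (!pipeline.Create() || !pipeline.Start()) {
        return -1;
    }

    // ... run a GMainLoop until shutdown; Destroy() also runs from the destructor ...
    return 0;
}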
================================================
FILE: ai_integration/deepstream/sp_mp4.json
================================================

{
    "name":"pipeline0",
    "input-config":{
        "type":1,
        "stream":{
            "uri":"file:///home/ricardo/workSpace/gstreamer-example/ai_integration/test.mp4",
            "file-loop":false,
            "rtsp-latency":0,
            "rtp-protocol":4
        },
        "usb-camera":{
            "device":"/dev/video0",
            "format":"MJPG",
            "width":1920,
            "height":1080,
            "framerate-n":30,
            "framerate-d":1
        }
    },
    "output-config":{
        "display":{
            "enable":true,
            "sync":true,
            "left":0,
            "top":0,
            "width":1920,
            "height":1080
        },
        "rtmp":{
            "enable":true,
            "bitrate":100000,
            "iframeinterval":30,
            "uri":"rtmp://127.0.0.1:1935/live/test"
        },
        "inference":{
            "enable":true,
            "memory-type":3,
            "format":"RGBA"
        }
    }
}

================================================
FILE: ai_integration/deepstream/sp_rtsp.json
================================================

{
    "name":"pipeline0",
    "input-config":{
        "type":1,
        "stream":{
            "uri":"rtsp://127.0.0.1:554/live/test",
            "file-loop":false,
            "rtsp-latency":0,
            "rtp-protocol":4
        },
        "usb-camera":{
            "device":"/dev/video0",
            "format":"MJPG",
            "width":1920,
            "height":1080,
            "framerate-n":30,
            "framerate-d":1
        }
    },
    "output-config":{
        "display":{
            "enable":true,
            "sync":true,
            "left":0,
            "top":0,
            "width":1920,
            "height":1080
        },
        "rtmp":{
            "enable":false,
            "bitrate":100000,
            "iframeinterval":30,
            "uri":"rtmp://127.0.0.1:1935/live/test"
        },
        "inference":{
            "enable":true,
            "memory-type":3,
            "format":"RGBA"
        }
    }
}

================================================
FILE: ai_integration/deepstream/sp_uc.json
================================================

{
    "name":"pipeline0",
    "input-config":{
        "type":2,
        "stream":{
            "uri":"rtsp://127.0.0.1:554/live/test",
            "file-loop":false,
            "rtsp-latency":0,
            "rtp-protocol":4
        },
        "usb-camera":{
            "device":"/dev/video0",
            "format":"MJPG",
            "width":1920,
            "height":1080,
            "framerate-n":30,
            "framerate-d":1
        }
    },
    "output-config":{
        "display":{
            "enable":true,
            "sync":true,
            "left":0,
            "top":0,
            "width":1920,
            "height":1080
        },
        "rtmp":{
            "enable":true,
            "bitrate":100000,
            "iframeinterval":30,
            "uri":"rtmp://127.0.0.1:1935/live/test"
        },
        "inference":{
            "enable":true,
            "memory-type":3,
            "format":"RGBA"
        }
    }
}
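The repository's main.cpp (not shown in this section) is what maps these JSON files onto VideoPipelineConfig. A minimal jsoncpp sketch of that mapping, covering only a subset of fields (my own illustration; field names are taken from the files above):

#include <fstream>
#include <string>

#include <json/json.h>

#include "VideoPipeline.h"

// Parse one of the sp_*.json files into a VideoPipelineConfig (subset only).
bool ParseConfig(const std::string& path, VideoPipelineConfig& config) {
    std::ifstream in(path);
    Json::Value root;
    Json::CharReaderBuilder builder;
    std::string errs;
    if (!Json::parseFromStream(builder, in, &root, &errs)) {
        return false;
    }
    config.pipeline_id = root["name"].asString();
    const Json::Value& input = root["input-config"];
    config.input_type = input["type"].asInt();
    config.src_uri = input["stream"]["uri"].asString();
    config.file_loop = input["stream"]["file-loop"].asBool();
    const Json::Value& display = root["output-config"]["display"];
    config.enable_hdmi = display["enable"].asBool();
    config.window_width = display["width"].asInt();
    config.window_height = display["height"].asInt();
    return true;
}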
================================================
FILE: ai_integration/deepstream/src/VideoPipeline.cpp
================================================

/*
 * @Description: Implementation of VideoPipeline on DeepStream.
 * @version: 2.0
 * @Author: Ricardo Lu
 * @Date: 2022-07-15 22:07:19
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2023-02-06 21:04:48
 */

#include "VideoPipeline.h"

static GstPadProbeReturn cb_sync_before_buffer_probe(
    GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
{
    // LOG_INFO("cb_sync_before_buffer_probe called");
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);
    GstBuffer* buffer = (GstBuffer*)info->data;

    return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn cb_sync_after_buffer_probe(
    GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
{
    // LOG_INFO("cb_sync_after_buffer_probe called");
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);
    GstBuffer* buffer = (GstBuffer*)info->data;

    // sync
    if (info->type & GST_PAD_PROBE_TYPE_BUFFER) {
        g_mutex_lock(&vp->m_syncMuxtex);
        g_atomic_int_inc(&vp->m_syncCount);
        g_cond_signal(&vp->m_syncCondition);
        g_mutex_unlock(&vp->m_syncMuxtex);
    }

    return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn cb_queue0_probe(
    GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
{
    // LOG_INFO("cb_queue0_probe called");
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);
    GstBuffer* buffer = (GstBuffer*)info->data;

    // sync
    // if (info->type & GST_PAD_PROBE_TYPE_BUFFER && !vp->m_isExited) {
    //     g_mutex_lock(&vp->m_syncMuxtex);
    //     while (g_atomic_int_get(&vp->m_syncCount) <= 0)
    //         g_cond_wait(&vp->m_syncCondition, &vp->m_syncMuxtex);
    //     if (!g_atomic_int_dec_and_test(&vp->m_syncCount)) {
    //         // LOG_INFO("m_syncCount:{}/{}", vp->m_syncCount, pipeline_id);
    //     }
    //     g_mutex_unlock(&vp->m_syncMuxtex);
    // }

    // osd the result
    if (vp->m_getResultFunc) {
        const std::shared_ptr<std::vector<OSDObject>> results =
            vp->m_getResultFunc(vp->m_getResultArgs);
        // to-do: construct nvdsosd metadata
        if (vp->m_procResultFunc) {
            vp->m_procResultFunc(buffer, results);
        }
    }

    // LOG_INFO("cb_queue0_probe exited");
    return GST_PAD_PROBE_OK;
}

static GstFlowReturn cb_appsink_new_sample(
    GstElement* appsink, gpointer user_data)
{
    // LOG_INFO("cb_appsink_new_sample called");
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);
    GstSample* sample = nullptr;

    if (!vp->m_dumped) {
        GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(vp->m_pipeline),
            GST_DEBUG_GRAPH_SHOW_ALL, "video-pipeline");
        vp->m_dumped = true;
    }

    g_signal_emit_by_name(appsink, "pull-sample", &sample);
    if (!sample) {
        return GST_FLOW_OK;
    }

    if (vp->m_putFrameFunc) {
        vp->m_putFrameFunc(sample, vp->m_putFrameArgs);
    } else {
        gst_sample_unref(sample);
    }

    return GST_FLOW_OK;
}
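/*
 * Notes on the callback above: the "new-sample" signal only fires when the
 * appsink has emit-signals=TRUE (visible in the .dot dump earlier), and
 * "pull-sample" transfers ownership of the GstSample to the caller, which
 * is why the sample is unreffed here when no consumer takes it; a consumer
 * installed via SetCallbacks(PutFrameFunc, ...) inherits that unref duty.
 * GST_DEBUG_BIN_TO_DOT_FILE() only writes a file when GST_DEBUG_DUMP_DOT_DIR
 * points at an existing directory.
 */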
static gboolean cb_seek_decoded_file(gpointer user_data)
{
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);

    LOG_INFO("============================================");
    LOG_INFO("cb_seek_decoded_file called({})", pipeline_id);
    LOG_INFO("============================================");

    gst_element_set_state(vp->m_pipeline, GST_STATE_PAUSED);

    if (!gst_element_seek(vp->m_pipeline, 1.0, GST_FORMAT_TIME,
            (GstSeekFlags)(GST_SEEK_FLAG_KEY_UNIT | GST_SEEK_FLAG_FLUSH),
            GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE)) {
        LOG_WARN("Failed to seek the source file in pipeline");
    }

    gst_element_set_state(vp->m_pipeline, GST_STATE_PLAYING);

    return false;
}

static GstPadProbeReturn cb_reset_stream_probe(
    GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
{
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);

    if (info->type & GST_PAD_PROBE_TYPE_BUFFER) {
        GST_BUFFER_PTS(GST_BUFFER(info->data)) += vp->m_prev_accumulated_base;
    }

    if (info->type & GST_PAD_PROBE_TYPE_EVENT_BOTH) {
        GstEvent* event = GST_EVENT(info->data);
        if (GST_EVENT_TYPE(event) == GST_EVENT_SEGMENT) {
            GstSegment* segment;
            gst_event_parse_segment(event, (const GstSegment**)&segment);
            segment->base = vp->m_accumulated_base;
            vp->m_prev_accumulated_base = vp->m_accumulated_base;
            vp->m_accumulated_base += segment->stop;
        } else if (GST_EVENT_TYPE(event) == GST_EVENT_EOS) {
            g_timeout_add(1, cb_seek_decoded_file, vp);
        }
        switch (GST_EVENT_TYPE(event)) {
        case GST_EVENT_EOS:
        case GST_EVENT_QOS:
        case GST_EVENT_SEGMENT:
        case GST_EVENT_FLUSH_START:
        case GST_EVENT_FLUSH_STOP:
            return GST_PAD_PROBE_DROP;
        default:
            break;
        }
    }

    return GST_PAD_PROBE_OK;
}
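/*
 * How the file-loop works: the probe above sits on the decoder sink pad
 * (installed in cb_decodebin_child_added below). On EOS it schedules a
 * flushing seek back to 0 instead of letting EOS reach the sinks, and it
 * rewrites each new SEGMENT's base while shifting outgoing buffer PTS by
 * the accumulated duration of all previous passes, so downstream elements
 * see a single, monotonically increasing timeline. The EOS/SEGMENT/FLUSH
 * events themselves are dropped so the loop stays invisible downstream.
 */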
static void cb_decodebin_child_added(GstChildProxy* child_proxy,
    GObject* object, gchar* name, gpointer user_data)
{
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);

    LOG_INFO("cb_decodebin_child_added called({},'{}' added)", pipeline_id, name);

    if (g_strrstr(name, "nvv4l2decoder") == name) {
        g_object_set(object, "cudadec-memtype", 2, nullptr);

        if (g_strstr_len(vp->m_config.src_uri.c_str(), -1, "file:/") ==
                vp->m_config.src_uri.c_str() && vp->m_config.file_loop) {
            GstPad* gst_pad = gst_element_get_static_pad(GST_ELEMENT(object), "sink");
            vp->m_dec_sink_probe = gst_pad_add_probe(gst_pad,
                (GstPadProbeType)(GST_PAD_PROBE_TYPE_EVENT_BOTH |
                    GST_PAD_PROBE_TYPE_EVENT_FLUSH | GST_PAD_PROBE_TYPE_BUFFER),
                cb_reset_stream_probe, static_cast<void*>(vp), nullptr);
            gst_object_unref(gst_pad);
            vp->m_decoder = GST_ELEMENT(object);
            gst_object_ref(object);
        } else if (g_strstr_len(vp->m_config.src_uri.c_str(), -1, "rtsp:/") ==
                vp->m_config.src_uri.c_str()) {
            vp->m_decoder = GST_ELEMENT(object);
            gst_object_ref(object);
        }
    } else if ((g_strrstr(name, "h264parse") == name) ||
               (g_strrstr(name, "h265parse") == name)) {
        LOG_INFO("set config-interval of {} to {}", name, -1);
        g_object_set(object, "config-interval", -1, nullptr);
    }

    return;
}

static void cb_uridecodebin_source_setup(GstElement* object, GstElement* source,
    gpointer user_data)
{
    LOG_INFO("cb_uridecodebin_source_setup called");
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);

    if (g_object_class_find_property(G_OBJECT_GET_CLASS(source), "latency")) {
        LOG_INFO("cb_uridecodebin_source_setup set {} latency", vp->m_config.rtsp_latency);
        g_object_set(G_OBJECT(source), "latency", vp->m_config.rtsp_latency, nullptr);
    }

    if (g_object_class_find_property(G_OBJECT_GET_CLASS(source), "protocols")) {
        LOG_INFO("set protocols of source to {}", vp->m_config.rtp_protocol);
        g_object_set(G_OBJECT(source), "protocols", vp->m_config.rtp_protocol, nullptr);
    }
}

static void cb_uridecodebin_pad_added(GstElement* decodebin, GstPad* pad,
    gpointer user_data)
{
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);
    GstPad* sinkpad = nullptr;
    GstCaps* caps = gst_pad_query_caps(pad, nullptr);
    const GstStructure* str = gst_caps_get_structure(caps, 0);
    const gchar* name = gst_structure_get_name(str);

    LOG_INFO("cb_uridecodebin_pad_added called {}", name);
    LOG_INFO("structure:{}", gst_structure_to_string(str));

    if (g_str_has_prefix(name, "video/x-raw")) {
        if (vp->m_config.enable_hdmi || vp->m_config.enable_rtmp ||
            vp->m_config.enable_appsink) {
            sinkpad = gst_element_get_static_pad(vp->m_tee0, "sink");
        } else {
            sinkpad = gst_element_get_static_pad(vp->m_fakesink, "sink");
        }

        if (sinkpad && gst_pad_link(pad, sinkpad) == GST_PAD_LINK_OK) {
            LOG_INFO("Successfully linked uridecodebin into pipeline");
        } else {
            LOG_ERROR("Failed to link uridecodebin to pipeline");
        }

        if (sinkpad) {
            gst_object_unref(sinkpad);
        }
    }

    gst_caps_unref(caps);
}

static void cb_uridecodebin_child_added(GstChildProxy* child_proxy,
    GObject* object, gchar* name, gpointer user_data)
{
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);

    LOG_INFO("cb_uridecodebin_child_added called({},'{}' added)", pipeline_id, name);

    if (g_strrstr(name, "decodebin") == name) {
        g_signal_connect(G_OBJECT(object), "child-added",
            G_CALLBACK(cb_decodebin_child_added), vp);
    }

    return;
}
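/*
 * For the mp4 case the autoplugged chain inside uridecodebin looks like
 * filesrc -> typefind -> qtdemux -> multiqueue -> h264parse -> capsfilter
 * -> nvv4l2decoder (see doc/video-pipeline.dot earlier in this section);
 * "child-added" is chained twice so the nvv4l2decoder created deep inside
 * decodebin can be reconfigured (cudadec-memtype) and given the loop probe.
 */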
static void cb_uridecodebin_child_added(GstChildProxy* child_proxy,
    GObject* object, gchar* name, gpointer user_data)
{
    VideoPipeline* vp = static_cast<VideoPipeline*>(user_data);

    LOG_INFO("cb_uridecodebin_child_added called({},'{}' added)",
        vp->m_config.pipeline_id, name);

    if (g_strrstr(name, "decodebin") == name) {
        g_signal_connect(G_OBJECT(object), "child-added",
            G_CALLBACK(cb_decodebin_child_added), vp);
    }
}

VideoPipeline::VideoPipeline(const VideoPipelineConfig& config)
{
    m_config = config;
    m_syncCount = 0;
    m_isExited = false;
    m_queue00_src_probe = -1;
    m_cvt_sink_probe = -1;
    m_cvt_src_probe = -1;
    m_dec_sink_probe = -1;
    m_prev_accumulated_base = 0;
    m_accumulated_base = 0;
    m_dumped = false;
    m_putFrameFunc = nullptr;
    m_putFrameArgs = nullptr;
    m_getResultFunc = nullptr;
    m_getResultArgs = nullptr;
    m_procResultFunc = nullptr;

    g_mutex_init(&m_syncMuxtex);
    g_cond_init(&m_syncCondition);
    g_mutex_init(&m_mutex);
}

VideoPipeline::~VideoPipeline()
{
    Destroy();
}

GstElement* VideoPipeline::CreateUridecodebin()
{
    if (!(m_source = gst_element_factory_make("uridecodebin", "uridecodebin0"))) {
        LOG_ERROR("Failed to create element uridecodebin named uridecodebin0");
        return nullptr;
    }
    g_object_set(G_OBJECT(m_source), "uri", m_config.src_uri.c_str(), nullptr);
    LOG_INFO("Set uri of uridecodebin to {}", m_config.src_uri);

    g_signal_connect(G_OBJECT(m_source), "source-setup", G_CALLBACK(
        cb_uridecodebin_source_setup), this);
    g_signal_connect(G_OBJECT(m_source), "pad-added", G_CALLBACK(
        cb_uridecodebin_pad_added), this);
    g_signal_connect(G_OBJECT(m_source), "child-added", G_CALLBACK(
        cb_uridecodebin_child_added), this);

    gst_bin_add_many(GST_BIN(m_pipeline), m_source, nullptr);

    return m_source;
}

GstElement* VideoPipeline::CreateV4l2src()
{
    if (!(m_source = gst_element_factory_make("v4l2src", "v4l2src0"))) {
        LOG_ERROR("Failed to create element v4l2src named v4l2src0");
        return nullptr;
    }
    g_object_set(G_OBJECT(m_source), "device", m_config.src_device.c_str(), nullptr);
    gst_bin_add_many(GST_BIN(m_pipeline), m_source, nullptr);

    GstCaps* caps = gst_caps_new_simple("image/jpeg",
        "width", G_TYPE_INT, m_config.src_width,
        "height", G_TYPE_INT, m_config.src_height,
        "framerate", GST_TYPE_FRACTION, m_config.src_framerate_n, m_config.src_framerate_d,
        "format", G_TYPE_STRING, m_config.src_format.c_str(), nullptr);

    if (!(m_capfilter0 = gst_element_factory_make("capsfilter", "capfilter0"))) {
        LOG_ERROR("Failed to create element capsfilter named capfilter0");
        return nullptr;
    }
    g_object_set(G_OBJECT(m_capfilter0), "caps", caps, nullptr);
    gst_caps_unref(caps);
    gst_bin_add_many(GST_BIN(m_pipeline), m_capfilter0, nullptr);

    if (!(m_decoder = gst_element_factory_make("jpegdec", "jpegdec0"))) {
        LOG_ERROR("Failed to create element jpegdec named jpegdec0");
        return nullptr;
    }
    gst_bin_add_many(GST_BIN(m_pipeline), m_decoder, nullptr);

    if (!gst_element_link_many(m_source, m_capfilter0, m_decoder, nullptr)) {
        LOG_ERROR("Failed to link v4l2src0->capfilter0->jpegdec0");
        return nullptr;
    }

    return m_decoder;
}
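// Topology assembled by Create(), depending on the config switches:
//
//   [uridecodebin]                       (MP4 / RTSP)
//   [v4l2src -> capfilter0 -> jpegdec0]  (USB camera)
//       -> tee0 -> queue00 -> fakesink0                          (no hdmi/rtmp)
//                  queue00 -> tee1 -> queue10 -> nveglglessink0  (hdmi)
//                             tee1 -> queue11 -> nvvideoconvert0 ->
//                                     capfilter1 -> nvv4l2h264enc0 ->
//                                     h264parse0 -> flvmux0 -> rtmpsink0  (rtmp)
//          tee0 -> queue01 -> nvvideoconvert1 -> capfilter2 -> appsink    (inference)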
bool VideoPipeline::Create()
{
    GstCaps* cvt_caps;
    GstPad* gst_pad;
    GstCapsFeatures* feature;
    GstElement* input;

    if (!(m_pipeline = gst_pipeline_new("video-pipeline"))) {
        LOG_ERROR("Failed to create pipeline named video-pipeline");
        goto exit;
    }
    gst_pipeline_set_auto_flush_bus(GST_PIPELINE(m_pipeline), true);

    input = m_config.input_type == VideoType::USB_CAMERE ?
        CreateV4l2src() : CreateUridecodebin();
    if (!input) {
        LOG_ERROR("Can't process input source.");
        goto exit;
    }

    if (!(m_tee0 = gst_element_factory_make("tee", "tee0"))) {
        LOG_ERROR("Failed to create element tee named tee0");
        goto exit;
    }
    gst_bin_add_many(GST_BIN(m_pipeline), m_tee0, nullptr);

    if (m_config.input_type == VideoType::USB_CAMERE) {
        if (!gst_element_link_many(input, m_tee0, nullptr)) {
            LOG_ERROR("Failed to link jpegdec0->tee0");
            goto exit;
        }
    }

    if (!(m_queue00 = gst_element_factory_make("queue", "queue00"))) {
        LOG_ERROR("Failed to create element queue named queue00");
        goto exit;
    }
    // add probe to queue00's src pad
    gst_pad = gst_element_get_static_pad(m_queue00, "src");
    m_queue00_src_probe = gst_pad_add_probe(gst_pad,
        (GstPadProbeType)(GST_PAD_PROBE_TYPE_BUFFER), cb_queue0_probe,
        static_cast<void*>(this), nullptr);
    gst_object_unref(gst_pad);
    gst_bin_add_many(GST_BIN(m_pipeline), m_queue00, nullptr);

    if (!gst_element_link_many(m_tee0, m_queue00, nullptr)) {
        LOG_ERROR("Failed to link tee0->queue00");
        goto exit;
    }

    if (!m_config.enable_hdmi && !m_config.enable_rtmp) {
        if (!(m_fakesink = gst_element_factory_make("fakesink", "fakesink0"))) {
            LOG_ERROR("Failed to create element fakesink named fakesink0");
            goto exit;
        }
        g_object_set(G_OBJECT(m_fakesink), "sync", true, nullptr);
        gst_bin_add_many(GST_BIN(m_pipeline), m_fakesink, nullptr);

        if (!gst_element_link_many(m_queue00, m_fakesink, nullptr)) {
            LOG_ERROR("Failed to link queue00->fakesink0");
            goto exit;
        }
    } else {
        if (!(m_tee1 = gst_element_factory_make("tee", "tee1"))) {
            LOG_ERROR("Failed to create element tee named tee1");
            goto exit;
        }
        gst_bin_add_many(GST_BIN(m_pipeline), m_tee1, nullptr);

        if (!gst_element_link_many(m_queue00, m_tee1, nullptr)) {
            LOG_ERROR("Failed to link queue00->tee1");
            goto exit;
        }

        if (m_config.enable_hdmi) {
            if (!(m_queue10 = gst_element_factory_make("queue", "queue10"))) {
                LOG_ERROR("Failed to create element queue named queue10");
                goto exit;
            }
            gst_bin_add_many(GST_BIN(m_pipeline), m_queue10, nullptr);

            if (!(m_nveglglessink = gst_element_factory_make("nveglglessink",
                "nveglglessink0"))) {
                LOG_ERROR("Failed to create element nveglglessink named nveglglessink0");
                goto exit;
            }
            g_object_set(G_OBJECT(m_nveglglessink), "sync", m_config.hdmi_sync,
                "window-x", m_config.window_x, "window-y", m_config.window_y,
                "window-width", m_config.window_width,
                "window-height", m_config.window_height, nullptr);
            gst_bin_add_many(GST_BIN(m_pipeline), m_nveglglessink, nullptr);

            if (!gst_element_link_many(m_tee1, m_queue10, m_nveglglessink, nullptr)) {
                LOG_ERROR("Failed to link tee1->queue10->nveglglessink0");
                goto exit;
            }
        }

        if (m_config.enable_rtmp) {
            if (!(m_queue11 = gst_element_factory_make("queue", "queue11"))) {
                LOG_ERROR("Failed to create element queue named queue11");
                goto exit;
            }
            gst_bin_add_many(GST_BIN(m_pipeline), m_queue11, nullptr);

            if (!(m_nvvideoconvert0 = gst_element_factory_make("nvvideoconvert",
                "nvvideoconvert0"))) {
                LOG_ERROR("Failed to create element nvvideoconvert named nvvideoconvert0");
                goto exit;
            }
            g_object_set(G_OBJECT(m_nvvideoconvert0), "nvbuf-memory-type",
                m_config.cvt_memory_type, nullptr);
            gst_bin_add_many(GST_BIN(m_pipeline), m_nvvideoconvert0, nullptr);

            cvt_caps = gst_caps_new_simple("video/x-raw",
                "format", G_TYPE_STRING, "NV12", nullptr);
            feature = gst_caps_features_new("memory:NVMM", nullptr);
            gst_caps_set_features(cvt_caps, 0, feature);

            if (!(m_capfilter1 = gst_element_factory_make("capsfilter", "capfilter1"))) {
                LOG_ERROR("Failed to create element capsfilter named capfilter1");
                goto exit;
            }
            g_object_set(G_OBJECT(m_capfilter1), "caps", cvt_caps, nullptr);
            gst_caps_unref(cvt_caps);
            gst_bin_add_many(GST_BIN(m_pipeline), m_capfilter1, nullptr);

            if (!(m_encoder = gst_element_factory_make("nvv4l2h264enc", "nvv4l2h264enc0"))) {
                LOG_ERROR("Failed to create element nvv4l2h264enc named nvv4l2h264enc0");
                goto exit;
            }
            g_object_set(G_OBJECT(m_encoder), "bitrate", m_config.enc_bitrate,
                "iframeinterval", m_config.enc_iframe_interval, nullptr);
            gst_bin_add_many(GST_BIN(m_pipeline), m_encoder, nullptr);

            if (!(m_h264parse = gst_element_factory_make("h264parse", "h264parse0"))) {
                LOG_ERROR("Failed to create element h264parse named h264parse0");
                goto exit;
            }
            gst_bin_add_many(GST_BIN(m_pipeline), m_h264parse, nullptr);

            if (!(m_flvmux = gst_element_factory_make("flvmux", "flvmux0"))) {
                LOG_ERROR("Failed to create element flvmux named flvmux0");
                goto exit;
            }
            gst_bin_add_many(GST_BIN(m_pipeline), m_flvmux, nullptr);

            if (!(m_rtmpsink = gst_element_factory_make("rtmpsink", "rtmpsink0"))) {
                LOG_ERROR("Failed to create element rtmpsink named rtmpsink0");
                goto exit;
            }
            g_object_set(G_OBJECT(m_rtmpsink), "location", m_config.rtmp_uri.c_str(), nullptr);
            gst_bin_add_many(GST_BIN(m_pipeline), m_rtmpsink, nullptr);

            if (!gst_element_link_many(m_tee1, m_queue11, m_nvvideoconvert0,
                m_capfilter1, m_encoder, m_h264parse, m_flvmux, m_rtmpsink, nullptr)) {
                LOG_ERROR("Failed to link tee1->queue11->nvvideoconvert0->capfilter1->nvv4l2h264enc0->h264parse0->flvmux0->rtmpsink0");
                goto exit;
            }
        }
    }

    if (m_config.enable_appsink) {
        if (!(m_queue01 = gst_element_factory_make("queue", "queue01"))) {
            LOG_ERROR("Failed to create element queue named queue01");
            goto exit;
        }
        gst_bin_add_many(GST_BIN(m_pipeline), m_queue01, nullptr);

        if (!(m_nvvideoconvert1 = gst_element_factory_make("nvvideoconvert",
            "nvvideoconvert1"))) {
            LOG_ERROR("Failed to create element nvvideoconvert named nvvideoconvert1");
            goto exit;
        }
        g_object_set(G_OBJECT(m_nvvideoconvert1), "nvbuf-memory-type",
            m_config.cvt_memory_type, nullptr);
        gst_bin_add_many(GST_BIN(m_pipeline), m_nvvideoconvert1, nullptr);

        cvt_caps = gst_caps_new_simple("video/x-raw",
            "format", G_TYPE_STRING, m_config.cvt_format.c_str(), nullptr);
        feature = gst_caps_features_new("memory:NVMM", nullptr);
        gst_caps_set_features(cvt_caps, 0, feature);

        if (!(m_capfilter2 = gst_element_factory_make("capsfilter", "capfilter2"))) {
            LOG_ERROR("Failed to create element capsfilter named capfilter2");
            goto exit;
        }
        g_object_set(G_OBJECT(m_capfilter2), "caps", cvt_caps, nullptr);
        gst_caps_unref(cvt_caps);
        gst_bin_add_many(GST_BIN(m_pipeline), m_capfilter2, nullptr);

        // gst_pad = gst_element_get_static_pad(m_nvvideoconvert1, "sink");
        // m_cvt_sink_probe = gst_pad_add_probe(gst_pad, (GstPadProbeType)(
        //     GST_PAD_PROBE_TYPE_BUFFER), cb_sync_before_buffer_probe,
        //     static_cast<void*>(this), nullptr);
        // gst_object_unref(gst_pad);

        // gst_pad = gst_element_get_static_pad(m_nvvideoconvert1, "src");
        // m_cvt_src_probe = gst_pad_add_probe(gst_pad, (GstPadProbeType)(
        //     GST_PAD_PROBE_TYPE_BUFFER), cb_sync_after_buffer_probe,
        //     static_cast<void*>(this), nullptr);
        // gst_object_unref(gst_pad);

        if (!(m_appsink = gst_element_factory_make("appsink", "appsink"))) {
            LOG_ERROR("Failed to create element appsink named appsink");
            goto exit;
        }
        g_object_set(m_appsink, "emit-signals", true, nullptr);
        g_signal_connect(m_appsink, "new-sample",
            G_CALLBACK(cb_appsink_new_sample), static_cast<void*>(this));
        gst_bin_add_many(GST_BIN(m_pipeline), m_appsink, nullptr);

        if (!gst_element_link_many(m_tee0, m_queue01, m_nvvideoconvert1,
            m_capfilter2, m_appsink, nullptr))
        {
            LOG_ERROR("Failed to link tee0->queue01->nvvideoconvert1->capfilter2->appsink");
            goto exit;
        }
    }

    return true;

exit:
    LOG_ERROR("Failed to create video pipeline");
    return false;
}

bool VideoPipeline::Start(void)
{
    LOG_INFO("Start pipeline called");

    if (GST_STATE_CHANGE_FAILURE == gst_element_set_state(m_pipeline,
        GST_STATE_PLAYING)) {
        LOG_ERROR("Failed to set pipeline to playing state");
        return false;
    }

    return true;
}

bool VideoPipeline::Pause(void)
{
    GstState state, pending;

    LOG_INFO("Pause pipeline called");

    if (GST_STATE_CHANGE_FAILURE == gst_element_get_state(
        m_pipeline, &state, &pending, 5 * GST_SECOND / 1000)) {
        LOG_WARN("Failed to get state of pipeline");
        return false;
    }

    if (state == GST_STATE_PAUSED) {
        return true;
    } else if (state == GST_STATE_PLAYING) {
        gst_element_set_state(m_pipeline, GST_STATE_PAUSED);
        gst_element_get_state(m_pipeline, &state, &pending, GST_CLOCK_TIME_NONE);
        return true;
    } else {
        LOG_WARN("Invalid state of pipeline: {}", gst_element_state_get_name(state));
        return false;
    }
}

bool VideoPipeline::Resume(void)
{
    GstState state, pending;

    LOG_INFO("Resume pipeline called");

    if (GST_STATE_CHANGE_FAILURE == gst_element_get_state(
        m_pipeline, &state, &pending, 5 * GST_SECOND / 1000)) {
        LOG_WARN("Failed to get state of pipeline");
        return false;
    }

    if (state == GST_STATE_PLAYING) {
        return true;
    } else if (state == GST_STATE_PAUSED) {
        gst_element_set_state(m_pipeline, GST_STATE_PLAYING);
        gst_element_get_state(m_pipeline, &state, &pending, GST_CLOCK_TIME_NONE);
        return true;
    } else {
        LOG_WARN("Invalid state of pipeline: {}", gst_element_state_get_name(state));
        return false;
    }
}

void VideoPipeline::Destroy(void)
{
    // Release the tee request pads created while linking. Requesting a fresh
    // "src_%u" pad in a loop would create a new pad on every call and never
    // terminate, so we iterate the existing src pads instead; releasing a pad
    // mutates the pad list, hence the resync after every release.
    GstElement* tees[2] = { m_tee0, m_tee1 };
    for (GstElement* tee : tees) {
        if (!tee) continue;

        GValue item = G_VALUE_INIT;
        GstIterator* it = gst_element_iterate_src_pads(tee);
        while (gst_iterator_next(it, &item) == GST_ITERATOR_OK) {
            GstPad* teeSrcPad = GST_PAD(g_value_get_object(&item));
            gst_element_release_request_pad(tee, teeSrcPad);
            g_value_reset(&item);
            gst_iterator_resync(it);
        }
        g_value_unset(&item);
        gst_iterator_free(it);
    }

    if (m_pipeline) {
        m_isExited = true;
        g_mutex_lock(&m_syncMuxtex);
        g_atomic_int_inc(&m_syncCount);
        g_cond_signal(&m_syncCondition);
        g_mutex_unlock(&m_syncMuxtex);
        gst_element_set_state(m_pipeline, GST_STATE_NULL);
        gst_object_unref(m_pipeline);
        m_pipeline = nullptr;
    }

    // if (m_cvt_sink_probe != -1 && m_nvvideoconvert1) {
    //     GstPad* gstpad = gst_element_get_static_pad(m_nvvideoconvert1, "sink");
    //     if (!gstpad) {
    //         LOG_ERROR("Could not find '{}' in '{}'", "sink",
    //             GST_ELEMENT_NAME(m_nvvideoconvert1));
    //     }
    //     gst_pad_remove_probe(gstpad, m_cvt_sink_probe);
    //     gst_object_unref(gstpad);
    //     m_cvt_sink_probe = -1;
    // }

    // if (m_cvt_src_probe != -1 && m_nvvideoconvert1) {
    //     GstPad* gstpad = gst_element_get_static_pad(m_nvvideoconvert1, "src");
    //     if (!gstpad) {
    //         LOG_ERROR("Could not find '{}' in '{}'", "src",
    //             GST_ELEMENT_NAME(m_nvvideoconvert1));
    //     }
    //     gst_pad_remove_probe(gstpad, m_cvt_src_probe);
    //     gst_object_unref(gstpad);
    //     m_cvt_src_probe = -1;
    // }

    if (m_dec_sink_probe != -1 && m_decoder) {
        GstPad* gstpad = gst_element_get_static_pad(m_decoder, "sink");
        if (!gstpad) {
            LOG_ERROR("Could not find '{}' in '{}'", "sink", GST_ELEMENT_NAME(m_decoder));
        }
        gst_pad_remove_probe(gstpad, m_dec_sink_probe);
        gst_object_unref(gstpad);
        m_dec_sink_probe = -1;
    }

    if (m_queue00_src_probe != -1 && m_queue00) {
        GstPad* gstpad = gst_element_get_static_pad(m_queue00, "src");
        if (!gstpad) {
            LOG_ERROR("Could not find '{}' in '{}'", "src", GST_ELEMENT_NAME(m_queue00));
        }
        gst_pad_remove_probe(gstpad, m_queue00_src_probe);
        gst_object_unref(gstpad);
        m_queue00_src_probe = -1;
    }

    g_mutex_clear(&m_mutex);
    g_mutex_clear(&m_syncMuxtex);
    g_cond_clear(&m_syncCondition);
}

void VideoPipeline::SetCallbacks(PutFrameFunc func, void* args)
{
    LOG_INFO("set PutFrameFunc callback called");
    m_putFrameFunc = func;
    m_putFrameArgs = args;
}

void VideoPipeline::SetCallbacks(GetResultFunc func, void* args)
{
    LOG_INFO("set GetResultFunc callback called");
    m_getResultFunc = func;
    m_getResultArgs = args;
}

void VideoPipeline::SetCallbacks(ProcResultFunc func)
{
    LOG_INFO("set ProcResultFunc callback called");
    m_procResultFunc = func;
}

================================================
FILE: ai_integration/deepstream/src/main.cpp
================================================
/*
 * @Description: Test program of VideoPipeline.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2022-07-15 22:07:33
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2023-02-06 20:45:00
 */
// Standard/third-party headers inferred from what this file uses (the
// original angle-bracket includes were not preserved in the extract).
#include <sys/stat.h>

#include <fstream>
#include <iostream>
#include <string>

#include <gflags/gflags.h>
#include <glib.h>
#include <gst/gst.h>
#include <json/json.h>

#include "Common.h"
#include "VideoPipeline.h"
#include "DoubleBufferCache.h"

static GMainLoop* g_main_loop = NULL;
Json::Reader g_reader;

static void Parse(VideoPipelineConfig& config, std::string& config_path)
{
    Json::Value root;
    std::ifstream in(config_path, std::ios::binary);
    g_reader.parse(in, root);

    if (root.isMember("name")) {
        config.pipeline_id = root["name"].asString();
        LOG_INFO("New pipeline name: {}", config.pipeline_id);
    }

    if (root.isMember("input-config")) {
        Json::Value inputConfig = root["input-config"];
        config.input_type = inputConfig["type"].asInt();  // 0-MP4 / 1-RTSP / 2-USB Camera
        LOG_INFO("Pipeline[{}]: type: {}", config.pipeline_id, config.input_type);
        config.src_uri = inputConfig["stream"]["uri"].asString();
        LOG_INFO("Pipeline[{}]: input: {}", config.pipeline_id, config.src_uri);
        config.file_loop = inputConfig["stream"]["file-loop"].asBool();
        LOG_INFO("Pipeline[{}]: file-loop: {}", config.pipeline_id, config.file_loop);
        config.rtsp_latency = inputConfig["stream"]["rtsp-latency"].asInt();
        LOG_INFO("Pipeline[{}]: rtsp-latency: {}", config.pipeline_id, config.rtsp_latency);
        config.rtp_protocol = inputConfig["stream"]["rtp-protocol"].asInt();
        LOG_INFO("Pipeline[{}]: rtp-protocol: {}", config.pipeline_id, config.rtp_protocol);
        config.src_device = inputConfig["usb-camera"]["device"].asString();
        LOG_INFO("Pipeline[{}]: usb camera device: {}", config.pipeline_id, config.src_device);
        config.src_format = inputConfig["usb-camera"]["format"].asString();
        LOG_INFO("Pipeline[{}]: usb camera output format: {}", config.pipeline_id, config.src_format);
        config.src_width = inputConfig["usb-camera"]["width"].asInt();
        LOG_INFO("Pipeline[{}]: usb camera output width: {}", config.pipeline_id, config.src_width);
        config.src_height = inputConfig["usb-camera"]["height"].asInt();
        LOG_INFO("Pipeline[{}]: usb camera output height: {}", config.pipeline_id, config.src_height);
        config.src_framerate_n = inputConfig["usb-camera"]["framerate-n"].asInt();
        LOG_INFO("Pipeline[{}]: usb camera framerate numerator: {}", config.pipeline_id, config.src_framerate_n);
        config.src_framerate_d = inputConfig["usb-camera"]["framerate-d"].asInt();
        LOG_INFO("Pipeline[{}]: usb camera framerate denominator: {}", config.pipeline_id, config.src_framerate_d);
    }

    if (root.isMember("output-config")) {
        Json::Value outputConfig = root["output-config"];

        if (outputConfig.isMember("display")) {
            Json::Value displayConfig = outputConfig["display"];
            config.enable_hdmi = displayConfig["enable"].asBool();
            LOG_INFO("Pipeline[{}]: enable-hdmi: {}", config.pipeline_id, config.enable_hdmi);
            config.hdmi_sync = displayConfig["sync"].asBool();
            LOG_INFO("Pipeline[{}]: hdmi-sync: {}", config.pipeline_id, config.hdmi_sync);
            config.window_x = displayConfig["left"].asInt();
            LOG_INFO("Pipeline[{}]: window-x: {}", config.pipeline_id, config.window_x);
            config.window_y = displayConfig["top"].asInt();
            LOG_INFO("Pipeline[{}]: window-y: {}", config.pipeline_id, config.window_y);
            config.window_width = displayConfig["width"].asInt();
            LOG_INFO("Pipeline[{}]: window-width: {}", config.pipeline_id, config.window_width);
            config.window_height = displayConfig["height"].asInt();
            LOG_INFO("Pipeline[{}]: window-height: {}", config.pipeline_id, config.window_height);
        }

        if (outputConfig.isMember("rtmp")) {
            Json::Value rtmpConfig = outputConfig["rtmp"];
            config.enable_rtmp = rtmpConfig["enable"].asBool();
            LOG_INFO("Pipeline[{}]: enable-rtmp: {}", config.pipeline_id, config.enable_rtmp);
            config.enc_bitrate = rtmpConfig["bitrate"].asInt();
            LOG_INFO("Pipeline[{}]: encode-bitrate: {}", config.pipeline_id, config.enc_bitrate);
            config.enc_iframe_interval = rtmpConfig["iframeinterval"].asInt();
            LOG_INFO("Pipeline[{}]: encode-iframeinterval: {}", config.pipeline_id, config.enc_iframe_interval);
            config.rtmp_uri = rtmpConfig["uri"].asString();
            LOG_INFO("Pipeline[{}]: rtmp-uri: {}", config.pipeline_id, config.rtmp_uri);
        }

        if (outputConfig.isMember("inference")) {
            Json::Value inferenceConfig = outputConfig["inference"];
            config.enable_appsink = inferenceConfig["enable"].asBool();
            LOG_INFO("Pipeline[{}]: enable-appsink: {}", config.pipeline_id, config.enable_appsink);
            config.cvt_memory_type = inferenceConfig["memory-type"].asInt();
            LOG_INFO("Pipeline[{}]: videoconvert memory type: {}", config.pipeline_id, config.cvt_memory_type);
            config.cvt_format = inferenceConfig["format"].asString();
            LOG_INFO("Pipeline[{}]: videoconvert format: {}", config.pipeline_id, config.cvt_format);
        }
    }
}
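/*
 * Illustrative config layout accepted by Parse() above. The keys match what
 * Parse() reads; the values are made-up examples, not the contents of the
 * sp_*.json files shipped in this directory.
 *
 * {
 *     "name": "pipeline0",
 *     "input-config": {
 *         "type": 1,                           // 0-MP4 / 1-RTSP / 2-USB Camera
 *         "stream": {
 *             "uri": "rtsp://127.0.0.1:8554/test",
 *             "file-loop": false,
 *             "rtsp-latency": 100,
 *             "rtp-protocol": 4
 *         },
 *         "usb-camera": {
 *             "device": "/dev/video0",
 *             "format": "MJPG",
 *             "width": 1920, "height": 1080,
 *             "framerate-n": 30, "framerate-d": 1
 *         }
 *     },
 *     "output-config": {
 *         "display":   { "enable": true, "sync": true, "left": 0, "top": 0,
 *                        "width": 1920, "height": 1080 },
 *         "rtmp":      { "enable": false, "bitrate": 4000000,
 *                        "iframeinterval": 30, "uri": "rtmp://127.0.0.1/live" },
 *         "inference": { "enable": true, "memory-type": 3, "format": "RGBA" }
 *     }
 * }
 */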
LOG_INFO("Pipeline[{}]: hdmi-sync: {}", config.pipeline_id, config.hdmi_sync); config.window_x = displayConfig["left"].asInt(); LOG_INFO("Pipeline[{}]: window-x: {}", config.pipeline_id, config.window_x); config.window_y = displayConfig["top"].asInt(); LOG_INFO("Pipeline[{}]: window-y: {}", config.pipeline_id, config.window_y); config.window_width = displayConfig["width"].asInt(); LOG_INFO("Pipeline[{}]: window-width: {}", config.pipeline_id, config.window_width); config.window_height = displayConfig["height"].asInt(); LOG_INFO("Pipeline[{}]: window-height: {}", config.pipeline_id, config.window_height); } if (outputConfig.isMember("rtmp")) { Json::Value rtmpConfig = outputConfig["rtmp"]; config.enable_rtmp = rtmpConfig["enable"].asBool(); LOG_INFO("Pipeline[{}]: enable-rtmp: {}", config.pipeline_id, config.enable_rtmp); config.enc_bitrate = rtmpConfig["bitrate"].asInt(); LOG_INFO("Pipeline[{}]: encode-birtate: {}", config.pipeline_id, config.enc_bitrate); config.enc_iframe_interval = rtmpConfig["iframeinterval"].asInt(); LOG_INFO("Pipeline[{}]: encode-iframeinterval: {}", config.pipeline_id, config.enc_iframe_interval); config.rtmp_uri = rtmpConfig["uri"].asString(); LOG_INFO("Pipeline[{}]: rtmp-uri: {}", config.pipeline_id, config.rtmp_uri); } if (outputConfig.isMember("inference")) { Json::Value inferenceConfig = outputConfig["inference"]; config.enable_appsink = inferenceConfig["enable"].asBool(); LOG_INFO("Pipeline[{}]: enable-appsink: {}", config.pipeline_id, config.enable_appsink); config.cvt_memory_type = inferenceConfig["memory-type"].asInt(); LOG_INFO("Pipeline[{}]: videoconvert memory type: {}", config.pipeline_id, config.cvt_memory_type); config.cvt_format = inferenceConfig["format"].asString(); LOG_INFO("Pipeline[{}]: videoconvert format: {}", config.pipeline_id, config.cvt_format); } } } static bool validateConfigPath(const char* name, const std::string& value) { if (0 == value.compare ("")) { LOG_ERROR("You must specify a config file!"); return false; } struct stat statbuf; if (0 == stat(value.c_str(), &statbuf)) { return true; } LOG_ERROR("Can't stat model file: {}", value); return false; } DEFINE_string(config_path, "./pipeline.json", "Model config file path."); DEFINE_validator(config_path, &validateConfigPath); int main(int argc, char* argv[]) { google::ParseCommandLineFlags(&argc, &argv, true); VideoPipelineConfig m_vpConfig; VideoPipeline *m_vp; Parse(m_vpConfig, FLAGS_config_path); gst_init(&argc, &argv); g_setenv("GST_DEBUG_DUMP_DOT_DIR", "/home/ricardo/workSpace/gstreamer-example/ai_integration/deepstream/build", true); if (!(g_main_loop = g_main_loop_new(NULL, FALSE))) { LOG_ERROR("Failed to new a object with type GMainLoop"); goto exit; } m_vp = new VideoPipeline(m_vpConfig); if (!m_vp->Create()) { LOG_ERROR("Pipeline Create failed: lack of elements"); goto exit; } m_vp->Start(); g_main_loop_run(g_main_loop); exit: if (g_main_loop) g_main_loop_unref(g_main_loop); if (m_vp) { // m_vp->Destroy(); delete m_vp; m_vp = NULL; } google::ShutDownCommandLineFlags(); return 0; } ================================================ FILE: ai_integration/test_30fps.h264 ================================================ [File too large to display: 15.0 MB] ================================================ FILE: ai_integration/test_30fps.ts ================================================ [File too large to display: 23.5 MB] ================================================ FILE: ai_integration/video-pipeline.dot ================================================ digraph pipeline { 
rankdir=LR; fontname="sans"; fontsize="10"; labelloc=t; nodesep=.1; ranksep=.2; label="\nvideo-pipeline\n[>]"; node [style="filled,rounded", shape=box, fontsize="9", fontname="sans", margin="0.0,0.0"]; edge [labelfontsize="6", fontsize="9", fontname="monospace"]; legend [ pos="0,0!", margin="0.05,0.05", style="filled", label="Legend\lElement-States: [~] void-pending, [0] null, [-] ready, [=] paused, [>] playing\lPad-Activation: [-] none, [>] push, [<] pull\lPad-Flags: [b]locked, [f]lushing, [b]locking, [E]OS; upper-case is set\lPad-Task: [T] has started task, [t] has paused task\l", ]; subgraph cluster_appsink_0x55726e1fbc80 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstAppSink\nappsink\n[>]\nparent=(GstPipeline) video-pipeline\nlast-sample=((GstSample*) 0x7f59fc124260)\neos=FALSE\nemit-signals=TRUE"; subgraph cluster_appsink_0x55726e1fbc80_sink { label=""; style="invis"; appsink_0x55726e1fbc80_sink_0x55726e1fc820 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } fillcolor="#aaaaff"; } subgraph cluster_capfilter1_0x55726d80e5b0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstCapsFilter\ncapfilter1\n[>]\nparent=(GstPipeline) video-pipeline\ncaps=video/x-raw(memory:NVMM), format=(string)RGBA"; subgraph cluster_capfilter1_0x55726d80e5b0_sink { label=""; style="invis"; capfilter1_0x55726d80e5b0_sink_0x55726e1fc380 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_capfilter1_0x55726d80e5b0_src { label=""; style="invis"; capfilter1_0x55726d80e5b0_src_0x55726e1fc5d0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } capfilter1_0x55726d80e5b0_sink_0x55726e1fc380 -> capfilter1_0x55726d80e5b0_src_0x55726e1fc5d0 [style="invis"]; fillcolor="#aaffaa"; } capfilter1_0x55726d80e5b0_src_0x55726e1fc5d0 -> appsink_0x55726e1fbc80_sink_0x55726e1fc820 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_videocvt1_0x55726e1f9df0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="Gstnvvideoconvert\nvideocvt1\n[>]\nparent=(GstPipeline) video-pipeline\nsrc-crop=\"0:0:0:0\"\ndest-crop=\"0:0:0:0\"\nnvbuf-memory-type=nvbuf-mem-cuda-unified"; subgraph cluster_videocvt1_0x55726e1f9df0_sink { label=""; style="invis"; videocvt1_0x55726e1f9df0_sink_0x55726d731d00 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_videocvt1_0x55726e1f9df0_src { label=""; style="invis"; videocvt1_0x55726e1f9df0_src_0x55726e1fc130 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } videocvt1_0x55726e1f9df0_sink_0x55726d731d00 -> videocvt1_0x55726e1f9df0_src_0x55726e1fc130 [style="invis"]; fillcolor="#aaffaa"; } videocvt1_0x55726e1f9df0_src_0x55726e1fc130 -> capfilter1_0x55726d80e5b0_sink_0x55726e1fc380 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_queue1_0x55726d734390 { fontname="Bitstream Vera Sans"; fontsize="8"; 
style="filled,rounded"; color=black; label="GstQueue\nqueue1\n[>]\nparent=(GstPipeline) video-pipeline\ncurrent-level-buffers=4\ncurrent-level-bytes=256\ncurrent-level-time=133200000"; subgraph cluster_queue1_0x55726d734390_sink { label=""; style="invis"; queue1_0x55726d734390_sink_0x55726d731860 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_queue1_0x55726d734390_src { label=""; style="invis"; queue1_0x55726d734390_src_0x55726d731ab0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb][T]", height="0.2", style="filled,solid"]; } queue1_0x55726d734390_sink_0x55726d731860 -> queue1_0x55726d734390_src_0x55726d731ab0 [style="invis"]; fillcolor="#aaffaa"; } queue1_0x55726d734390_src_0x55726d731ab0 -> videocvt1_0x55726e1f9df0_sink_0x55726d731d00 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_display_0x55726e1f43a0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstEglGlesSink\ndisplay\n[>]\nparent=(GstPipeline) video-pipeline\nmax-lateness=5000000\nqos=TRUE\nlast-sample=((GstSample*) 0x7f59fc124340)\nprocessing-deadline=15000000\nwindow-x=0\nwindow-y=0\nwindow-width=1920\nwindow-height=1080"; subgraph cluster_display_0x55726e1f43a0_sink { label=""; style="invis"; display_0x55726e1f43a0_sink_0x55726d731610 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } fillcolor="#aaaaff"; } subgraph cluster_overlay_0x55726e13bc20 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstNvDsOsd\noverlay\n[>]\nparent=(GstPipeline) video-pipeline\nclock-font=NULL\nclock-font-size=0\nclock-color=0\nhw-blend-color-attr=\"3,1.000000,1.000000,0.000000,0.300000:\"\ndisplay-mask=FALSE"; subgraph cluster_overlay_0x55726e13bc20_sink { label=""; style="invis"; overlay_0x55726e13bc20_sink_0x55726d731170 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_overlay_0x55726e13bc20_src { label=""; style="invis"; overlay_0x55726e13bc20_src_0x55726d7313c0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } overlay_0x55726e13bc20_sink_0x55726d731170 -> overlay_0x55726e13bc20_src_0x55726d7313c0 [style="invis"]; fillcolor="#aaffaa"; } overlay_0x55726e13bc20_src_0x55726d7313c0 -> display_0x55726e1f43a0_sink_0x55726d731610 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_capfilter0_0x55726d80e270 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstCapsFilter\ncapfilter0\n[>]\nparent=(GstPipeline) video-pipeline\ncaps=video/x-raw(memory:NVMM), format=(string)RGBA"; subgraph cluster_capfilter0_0x55726d80e270_sink { label=""; style="invis"; capfilter0_0x55726d80e270_sink_0x55726d730cd0 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_capfilter0_0x55726d80e270_src { label=""; style="invis"; capfilter0_0x55726d80e270_src_0x55726d730f20 [color=black, fillcolor="#ffaaaa", 
label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } capfilter0_0x55726d80e270_sink_0x55726d730cd0 -> capfilter0_0x55726d80e270_src_0x55726d730f20 [style="invis"]; fillcolor="#aaffaa"; } capfilter0_0x55726d80e270_src_0x55726d730f20 -> overlay_0x55726e13bc20_sink_0x55726d731170 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_videocvt0_0x55726d7c6980 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="Gstnvvideoconvert\nvideocvt0\n[>]\nparent=(GstPipeline) video-pipeline\nsrc-crop=\"0:0:0:0\"\ndest-crop=\"0:0:0:0\"\nnvbuf-memory-type=nvbuf-mem-cuda-unified"; subgraph cluster_videocvt0_0x55726d7c6980_sink { label=""; style="invis"; videocvt0_0x55726d7c6980_sink_0x55726d730830 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_videocvt0_0x55726d7c6980_src { label=""; style="invis"; videocvt0_0x55726d7c6980_src_0x55726d730a80 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } videocvt0_0x55726d7c6980_sink_0x55726d730830 -> videocvt0_0x55726d7c6980_src_0x55726d730a80 [style="invis"]; fillcolor="#aaffaa"; } videocvt0_0x55726d7c6980_src_0x55726d730a80 -> capfilter0_0x55726d80e270_sink_0x55726d730cd0 [label="video/x-raw(memory:NVMM)\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l format: RGBA\l block-linear: false\l"] subgraph cluster_queue0_0x55726d734090 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstQueue\nqueue0\n[>]\nparent=(GstPipeline) video-pipeline\ncurrent-level-buffers=3\ncurrent-level-bytes=192\ncurrent-level-time=99900000"; subgraph cluster_queue0_0x55726d734090_sink { label=""; style="invis"; queue0_0x55726d734090_sink_0x55726d730390 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_queue0_0x55726d734090_src { label=""; style="invis"; queue0_0x55726d734090_src_0x55726d7305e0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb][T]", height="0.2", style="filled,solid"]; } queue0_0x55726d734090_sink_0x55726d730390 -> queue0_0x55726d734090_src_0x55726d7305e0 [style="invis"]; fillcolor="#aaffaa"; } queue0_0x55726d734090_src_0x55726d7305e0 -> videocvt0_0x55726d7c6980_sink_0x55726d730830 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_tee0_0x55726d72e000 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstTee\ntee0\n[>]\nparent=(GstPipeline) video-pipeline\nnum-src-pads=2"; subgraph cluster_tee0_0x55726d72e000_sink { label=""; style="invis"; tee0_0x55726d72e000_sink_0x55726d730140 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_tee0_0x55726d72e000_src { label=""; style="invis"; tee0_0x55726d72e000_src_0_0x55726d7282e0 [color=black, fillcolor="#ffaaaa", label="src_0\n[>][bfb]", height="0.2", style="filled,dashed"]; 
tee0_0x55726d72e000_src_1_0x55726d728540 [color=black, fillcolor="#ffaaaa", label="src_1\n[>][bfb]", height="0.2", style="filled,dashed"]; } tee0_0x55726d72e000_sink_0x55726d730140 -> tee0_0x55726d72e000_src_0_0x55726d7282e0 [style="invis"]; fillcolor="#aaffaa"; } tee0_0x55726d72e000_src_0_0x55726d7282e0 -> queue0_0x55726d734090_sink_0x55726d730390 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] tee0_0x55726d72e000_src_1_0x55726d728540 -> queue1_0x55726d734390_sink_0x55726d731860 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_uri_0x55726d728060 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstURIDecodeBin\nuri\n[>]\nparent=(GstPipeline) video-pipeline\nuri=\"file:///home/ricardo/workSpace/gstreamer-example/ai_integration/test.mp4\"\nsource=(GstFileSrc) source\ncaps=video/x-raw(ANY); audio/x-raw(ANY); text/x-raw(ANY); subpicture/x-dvd; subpictur…"; subgraph cluster_uri_0x55726d728060_src { label=""; style="invis"; _proxypad4_0x55726d729d10 [color=black, fillcolor="#ffdddd", label="proxypad4\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad4_0x55726d729d10 -> uri_0x55726d728060_src_0_0x55726f06eaf0 [style=dashed, minlen=0] uri_0x55726d728060_src_0_0x55726f06eaf0 [color=black, fillcolor="#ffdddd", label="src_0\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad5_0x7f5a0032c130 [color=black, fillcolor="#ffdddd", label="proxypad5\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad5_0x7f5a0032c130 -> uri_0x55726d728060_src_1_0x55726f06ed70 [style=dashed, minlen=0] uri_0x55726d728060_src_1_0x55726f06ed70 [color=black, fillcolor="#ffdddd", label="src_1\n[>][bfb]", height="0.2", style="filled,dotted"]; } fillcolor="#ffffff"; subgraph cluster_decodebin0_0x55726f06c090 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstDecodeBin\ndecodebin0\n[>]\nparent=(GstURIDecodeBin) uri\ncaps=video/x-raw(ANY); audio/x-raw(ANY); text/x-raw(ANY); subpicture/x-dvd; subpictur…"; subgraph cluster_decodebin0_0x55726f06c090_sink { label=""; style="invis"; _proxypad0_0x55726d7287b0 [color=black, fillcolor="#ddddff", label="proxypad0\n[<][bfb]", height="0.2", style="filled,solid"]; decodebin0_0x55726f06c090_sink_0x55726f06e0f0 -> _proxypad0_0x55726d7287b0 [style=dashed, minlen=0] decodebin0_0x55726f06c090_sink_0x55726f06e0f0 [color=black, fillcolor="#ddddff", label="sink\n[<][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_decodebin0_0x55726f06c090_src { label=""; style="invis"; _proxypad2_0x55726d728a10 [color=black, fillcolor="#ffdddd", label="proxypad2\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad2_0x55726d728a10 -> decodebin0_0x55726f06c090_src_0_0x7f5a080320a0 [style=dashed, minlen=0] decodebin0_0x55726f06c090_src_0_0x7f5a080320a0 [color=black, fillcolor="#ffdddd", label="src_0\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad3_0x55726d729390 [color=black, fillcolor="#ffdddd", label="proxypad3\n[>][bfb]", height="0.2", style="filled,dotted"]; _proxypad3_0x55726d729390 -> 
decodebin0_0x55726f06c090_src_1_0x7f5a08032b20 [style=dashed, minlen=0] decodebin0_0x55726f06c090_src_1_0x7f5a08032b20 [color=black, fillcolor="#ffdddd", label="src_1\n[>][bfb]", height="0.2", style="filled,dotted"]; } decodebin0_0x55726f06c090_sink_0x55726f06e0f0 -> decodebin0_0x55726f06c090_src_0_0x7f5a080320a0 [style="invis"]; fillcolor="#ffffff"; subgraph cluster_nvv4l2decoder0_0x7f5a00018ee0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="nvv4l2decoder\nnvv4l2decoder0\n[>]\nparent=(GstDecodeBin) decodebin0\ndevice=\"/dev/nvidia0\"\ndevice-name=\"\"\ndevice-fd=31\ndrop-frame-interval=0\nnum-extra-surfaces=0"; subgraph cluster_nvv4l2decoder0_0x7f5a00018ee0_sink { label=""; style="invis"; nvv4l2decoder0_0x7f5a00018ee0_sink_0x7f59fc132410 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_nvv4l2decoder0_0x7f5a00018ee0_src { label=""; style="invis"; nvv4l2decoder0_0x7f5a00018ee0_src_0x7f59fc132660 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb][T]", height="0.2", style="filled,solid"]; } nvv4l2decoder0_0x7f5a00018ee0_sink_0x7f59fc132410 -> nvv4l2decoder0_0x7f5a00018ee0_src_0x7f59fc132660 [style="invis"]; fillcolor="#aaffaa"; } nvv4l2decoder0_0x7f5a00018ee0_src_0x7f59fc132660 -> _proxypad2_0x55726d728a10 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"] subgraph cluster_avdec_aac0_0x7f59fc1314d0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="avdec_aac\navdec_aac0\n[>]\nparent=(GstDecodeBin) decodebin0"; subgraph cluster_avdec_aac0_0x7f59fc1314d0_sink { label=""; style="invis"; avdec_aac0_0x7f59fc1314d0_sink_0x7f59fc00b8f0 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_avdec_aac0_0x7f59fc1314d0_src { label=""; style="invis"; avdec_aac0_0x7f59fc1314d0_src_0x7f59fc00bb40 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } avdec_aac0_0x7f59fc1314d0_sink_0x7f59fc00b8f0 -> avdec_aac0_0x7f59fc1314d0_src_0x7f59fc00bb40 [style="invis"]; fillcolor="#aaffaa"; } avdec_aac0_0x7f59fc1314d0_src_0x7f59fc00bb40 -> _proxypad3_0x55726d729390 [label="audio/x-raw\l format: F32LE\l layout: non-interleaved\l rate: 48000\l channels: 2\l channel-mask: 0x0000000000000003\l"] subgraph cluster_aacparse0_0x7f59fc0900f0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstAacParse\naacparse0\n[>]\nparent=(GstDecodeBin) decodebin0"; subgraph cluster_aacparse0_0x7f59fc0900f0_sink { label=""; style="invis"; aacparse0_0x7f59fc0900f0_sink_0x7f59fc00b450 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_aacparse0_0x7f59fc0900f0_src { label=""; style="invis"; aacparse0_0x7f59fc0900f0_src_0x7f59fc00b6a0 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } aacparse0_0x7f59fc0900f0_sink_0x7f59fc00b450 -> aacparse0_0x7f59fc0900f0_src_0x7f59fc00b6a0 [style="invis"]; fillcolor="#aaffaa"; } aacparse0_0x7f59fc0900f0_src_0x7f59fc00b6a0 -> avdec_aac0_0x7f59fc1314d0_sink_0x7f59fc00b8f0 [label="audio/mpeg\l mpegversion: 4\l framed: true\l stream-format: raw\l level: 2\l base-profile: lc\l profile: lc\l 
codec_data: 1190\l rate: 48000\l channels: 2\l"] subgraph cluster_capsfilter0_0x55726d80ef70 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstCapsFilter\ncapsfilter0\n[>]\nparent=(GstDecodeBin) decodebin0\ncaps=video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, parsed=(b…"; subgraph cluster_capsfilter0_0x55726d80ef70_sink { label=""; style="invis"; capsfilter0_0x55726d80ef70_sink_0x7f59fc00a8c0 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_capsfilter0_0x55726d80ef70_src { label=""; style="invis"; capsfilter0_0x55726d80ef70_src_0x7f59fc00ab10 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } capsfilter0_0x55726d80ef70_sink_0x7f59fc00a8c0 -> capsfilter0_0x55726d80ef70_src_0x7f59fc00ab10 [style="invis"]; fillcolor="#aaffaa"; } capsfilter0_0x55726d80ef70_src_0x7f59fc00ab10 -> nvv4l2decoder0_0x7f5a00018ee0_sink_0x7f59fc132410 [label="video/x-h264\l stream-format: byte-stream\l alignment: au\l level: 4.2\l profile: high\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l interlace-mode: progressive\l chroma-format: 4:2:0\l bit-depth-luma: 8\l bit-depth-chroma: 8\l parsed: true\l"] subgraph cluster_h264parse0_0x7f59fc0108a0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstH264Parse\nh264parse0\n[>]\nparent=(GstDecodeBin) decodebin0\nconfig-interval=-1"; subgraph cluster_h264parse0_0x7f59fc0108a0_sink { label=""; style="invis"; h264parse0_0x7f59fc0108a0_sink_0x7f59fc00a420 [color=black, fillcolor="#aaaaff", label="sink\n[>][bfb]", height="0.2", style="filled,solid"]; } subgraph cluster_h264parse0_0x7f59fc0108a0_src { label=""; style="invis"; h264parse0_0x7f59fc0108a0_src_0x7f59fc00a670 [color=black, fillcolor="#ffaaaa", label="src\n[>][bfb]", height="0.2", style="filled,solid"]; } h264parse0_0x7f59fc0108a0_sink_0x7f59fc00a420 -> h264parse0_0x7f59fc0108a0_src_0x7f59fc00a670 [style="invis"]; fillcolor="#aaffaa"; } h264parse0_0x7f59fc0108a0_src_0x7f59fc00a670 -> capsfilter0_0x55726d80ef70_sink_0x7f59fc00a8c0 [label="video/x-h264\l stream-format: byte-stream\l alignment: au\l level: 4.2\l profile: high\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l framerate: 20000/333\l interlace-mode: progressive\l chroma-format: 4:2:0\l bit-depth-luma: 8\l bit-depth-chroma: 8\l parsed: true\l"] subgraph cluster_multiqueue0_0x7f59fc00d060 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstMultiQueue\nmultiqueue0\n[>]\nparent=(GstDecodeBin) decodebin0\nmax-size-bytes=2097152\nmax-size-time=0"; subgraph cluster_multiqueue0_0x7f59fc00d060_sink { label=""; style="invis"; multiqueue0_0x7f59fc00d060_sink_0_0x55726e1fdcf0 [color=black, fillcolor="#aaaaff", label="sink_0\n[>][bfb]", height="0.2", style="filled,dashed"]; multiqueue0_0x7f59fc00d060_sink_1_0x7f59fc00afb0 [color=black, fillcolor="#aaaaff", label="sink_1\n[>][bfb]", height="0.2", style="filled,dashed"]; } subgraph cluster_multiqueue0_0x7f59fc00d060_src { label=""; style="invis"; multiqueue0_0x7f59fc00d060_src_0_0x7f59fc00a1d0 [color=black, fillcolor="#ffaaaa", label="src_0\n[>][bfb][T]", height="0.2", style="filled,dotted"]; multiqueue0_0x7f59fc00d060_src_1_0x7f59fc00b200 [color=black, fillcolor="#ffaaaa", label="src_1\n[>][bfb][T]", height="0.2", style="filled,dotted"]; } multiqueue0_0x7f59fc00d060_sink_0_0x55726e1fdcf0 -> 
multiqueue0_0x7f59fc00d060_src_0_0x7f59fc00a1d0 [style="invis"]; fillcolor="#aaffaa"; } multiqueue0_0x7f59fc00d060_src_0_0x7f59fc00a1d0 -> h264parse0_0x7f59fc0108a0_sink_0x7f59fc00a420 [label="video/x-h264\l stream-format: avc\l alignment: au\l level: 4.2\l profile: high\l codec_data: 0164002affe10018676400...\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l"] multiqueue0_0x7f59fc00d060_src_1_0x7f59fc00b200 -> aacparse0_0x7f59fc0900f0_sink_0x7f59fc00b450 [label="audio/mpeg\l mpegversion: 4\l framed: true\l stream-format: raw\l level: 2\l base-profile: lc\l profile: lc\l codec_data: 1190\l rate: 48000\l channels: 2\l"] subgraph cluster_qtdemux0_0x7f5a0807e140 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstQTDemux\nqtdemux0\n[>]\nparent=(GstDecodeBin) decodebin0"; subgraph cluster_qtdemux0_0x7f5a0807e140_sink { label=""; style="invis"; qtdemux0_0x7f5a0807e140_sink_0x55726e1fd160 [color=black, fillcolor="#aaaaff", label="sink\n[<][bfb][T]", height="0.2", style="filled,solid"]; } subgraph cluster_qtdemux0_0x7f5a0807e140_src { label=""; style="invis"; qtdemux0_0x7f5a0807e140_video_0_0x55726e1fdaa0 [color=black, fillcolor="#ffaaaa", label="video_0\n[>][bfb]", height="0.2", style="filled,dotted"]; qtdemux0_0x7f5a0807e140_audio_0_0x7f59fc00ad60 [color=black, fillcolor="#ffaaaa", label="audio_0\n[>][bfb]", height="0.2", style="filled,dotted"]; } qtdemux0_0x7f5a0807e140_sink_0x55726e1fd160 -> qtdemux0_0x7f5a0807e140_video_0_0x55726e1fdaa0 [style="invis"]; fillcolor="#aaffaa"; } qtdemux0_0x7f5a0807e140_video_0_0x55726e1fdaa0 -> multiqueue0_0x7f59fc00d060_sink_0_0x55726e1fdcf0 [label="video/x-h264\l stream-format: avc\l alignment: au\l level: 4.2\l profile: high\l codec_data: 0164002affe10018676400...\l width: 1920\l height: 1080\l pixel-aspect-ratio: 1/1\l"] qtdemux0_0x7f5a0807e140_audio_0_0x7f59fc00ad60 -> multiqueue0_0x7f59fc00d060_sink_1_0x7f59fc00afb0 [label="audio/mpeg\l mpegversion: 4\l framed: true\l stream-format: raw\l level: 2\l base-profile: lc\l profile: lc\l codec_data: 1190\l rate: 48000\l channels: 2\l"] subgraph cluster_typefind_0x55726f3c40b0 { fontname="Bitstream Vera Sans"; fontsize="8"; style="filled,rounded"; color=black; label="GstTypeFindElement\ntypefind\n[>]\nparent=(GstDecodeBin) decodebin0\ncaps=video/quicktime, variant=(string)iso"; subgraph cluster_typefind_0x55726f3c40b0_sink { label=""; style="invis"; typefind_0x55726f3c40b0_sink_0x55726e1fccc0 [color=black, fillcolor="#aaaaff", label="sink\n[<][bfb][t]", height="0.2", style="filled,solid"]; } subgraph cluster_typefind_0x55726f3c40b0_src { label=""; style="invis"; typefind_0x55726f3c40b0_src_0x55726e1fcf10 [color=black, fillcolor="#ffaaaa", label="src\n[<][bfb]", height="0.2", style="filled,solid"]; } typefind_0x55726f3c40b0_sink_0x55726e1fccc0 -> typefind_0x55726f3c40b0_src_0x55726e1fcf10 [style="invis"]; fillcolor="#aaffaa"; } _proxypad0_0x55726d7287b0 -> typefind_0x55726f3c40b0_sink_0x55726e1fccc0 [label="ANY"] typefind_0x55726f3c40b0_src_0x55726e1fcf10 -> qtdemux0_0x7f5a0807e140_sink_0x55726e1fd160 [labeldistance="10", labelangle="0", label=" ", taillabel="ANY", headlabel="video/quicktime\lvideo/mj2\laudio/x-m4a\lapplication/x-3gp\l"] } decodebin0_0x55726f06c090_src_0_0x7f5a080320a0 -> _proxypad4_0x55726d729d10 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l 
framerate: 20000/333\l"]
decodebin0_0x55726f06c090_src_1_0x7f5a08032b20 -> _proxypad5_0x7f5a0032c130 [label="audio/x-raw\l format: F32LE\l layout: non-interleaved\l rate: 48000\l channels: 2\l channel-mask: 0x0000000000000003\l"]
subgraph cluster_source_0x55726e7243e0 {
  fontname="Bitstream Vera Sans";
  fontsize="8";
  style="filled,rounded";
  color=black;
  label="GstFileSrc\nsource\n[>]\nparent=(GstURIDecodeBin) uri\nlocation=\"/home/ricardo/workSpace/gstreamer-example/ai_integration/test.mp4\"";
  subgraph cluster_source_0x55726e7243e0_src {
    label="";
    style="invis";
    source_0x55726e7243e0_src_0x55726e1fca70 [color=black, fillcolor="#ffaaaa", label="src\n[<][bfb]", height="0.2", style="filled,solid"];
  }
  fillcolor="#ffaaaa";
}
source_0x55726e7243e0_src_0x55726e1fca70 -> decodebin0_0x55726f06c090_sink_0x55726f06e0f0 [label="ANY"]
}
uri_0x55726d728060_src_0_0x55726f06eaf0 -> tee0_0x55726d72e000_sink_0x55726d730140 [label="video/x-raw(memory:NVMM)\l format: NV12\l width: 1920\l height: 1080\l interlace-mode: progressive\l multiview-mode: mono\l multiview-flags: 0:ffffffff:/right-view...\l pixel-aspect-ratio: 1/1\l chroma-site: mpeg2\l colorimetry: bt709\l framerate: 20000/333\l"]
}

================================================
FILE: application_develop/GstPadProbe/CMakeLists.txt
================================================
# created by Ricardo Lu in 08/29/2021
cmake_minimum_required(VERSION 3.10)
project(GstPadProbe)

set(CMAKE_CXX_STANDARD 11)
set(OpenCV_DIR "/opt/thundersoft/opencv-4.2.0/lib/cmake/opencv4")

find_package(OpenCV REQUIRED)
include(FindPkgConfig)
pkg_check_modules(GST REQUIRED gstreamer-1.0)
pkg_check_modules(GSTAPP REQUIRED gstreamer-app-1.0)
pkg_check_modules(GLIB REQUIRED glib-2.0)
pkg_check_modules(GFLAGS REQUIRED gflags)

include_directories(
    ${PROJECT_SOURCE_DIR}/inc
    ${GST_INCLUDE_DIRS}
    ${GSTAPP_INCLUDE_DIRS}
    ${GLIB_INCLUDE_DIRS}
    ${GFLAGS_INCLUDE_DIRS}
    ${OpenCV_INCLUDE_DIRS}
)

link_directories(
    ${GST_LIBRARY_DIRS}
    ${GSTAPP_LIBRARY_DIRS}
    ${GLIB_LIBRARY_DIRS}
    ${GFLAGS_LIBRARY_DIRS}
    ${OpenCV_LIBRARY_DIRS}
)

OPTION(COMPILE_FILE_SOURCE "build filesrc" OFF)
OPTION(COMPILE_RTSP_SOURCE "build rtspsrc" OFF)

if(COMPILE_FILE_SOURCE)
    add_definitions(-DFILE_SOURCE)
endif(COMPILE_FILE_SOURCE)

if(COMPILE_RTSP_SOURCE)
    add_definitions(-DRTSP_SOURCE)
endif(COMPILE_RTSP_SOURCE)

add_executable(${PROJECT_NAME}
    src/VideoPipeline.cpp
    src/main.cpp
)

target_link_libraries(${PROJECT_NAME}
    ${GST_LIBRARIES}
    ${GSTAPP_LIBRARIES}
    ${GLIB_LIBRARIES}
    ${GFLAGS_LIBRARIES}
    ${OpenCV_LIBRARIES}
    qtimlmeta
)

================================================
FILE: application_develop/GstPadProbe/README.md
================================================
# GstPadProbe

The data flow, events and queries passing through a GstPad can be monitored with probes. Probes are installed with `gst_pad_add_probe()`, which gives developers yet another way to access the data inside a GStreamer pipeline.

**Tutorial: [GstPadProbe](https://ricardolu.gitbook.io/gstreamer/application-development/gstpadprobe)**

References:

- [GstPad](https://gstreamer.freedesktop.org/documentation/gstreamer/gstpad.html)
- [Basic tutorial 7: Multithreading and Pad Availability](https://gstreamer.freedesktop.org/documentation/tutorials/basic/multithreading-and-pad-availability.html?gi-language=c)
- [Buffers not writable after tee](https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/609)

## build & run

```shell
mkdir build
cd build

cmake .. -DCOMPILE_RTSP_SOURCE=ON -DCOMPILE_FILE_SOURCE=OFF
make
# rtspsrc
./GstPadProbe --srcuri rtsp://admin:1234@10.0.23.227:554

cmake .. \
    -DCOMPILE_RTSP_SOURCE=OFF -DCOMPILE_FILE_SOURCE=ON
make
# filesrc
./GstPadProbe --srcuri /usr/local/gstreamer-example/application_develop/video.mp4
```

================================================
FILE: application_develop/GstPadProbe/doc/gstpadprobe.md
================================================
# GstPadProbe

The [GStreamer-APP](https://ricardolu.gitbook.io/gstreamer/application-development/app) chapter covered one way for an application to exchange data with a GStreamer pipeline: in its example, `appsink` pulled image data out of the pipeline for drawing, and the drawn images were fed back into the pipeline through `appsrc`. This is currently the simplest architecture for an application built on GStreamer. Note, however, that the `appsink` and the `appsrc` there actually belong to two separate pipelines, which is quite cumbersome to use. In this tutorial I will show how to achieve the same goal with a single example pipeline similar to the one in [Basic tutorial 7: Multithreading and Pad Availability](https://ricardolu.gitbook.io/gstreamer/basic-theory/basic-tutorial-7-multithreading-and-pad-availability).

## GstPadProbe

GstElements are actually connected through GstPads, which are very lightweight, primitive link points. Data is passed between GstPads, and a probe lets you observe and intercept it as it flows by.

### gst_pad_add_probe()

```c
gulong
gst_pad_add_probe (GstPad * pad,
                   GstPadProbeType mask,
                   GstPadProbeCallback callback,
                   gpointer user_data,
                   GDestroyNotify destroy_data)
```

Be notified of different states of pads; the provided callback is called for every state that matches the mask.

- `pad`: the GstPad to add the probe to
- `mask`: the probe mask; see [GstPadProbeType](https://gstreamer.freedesktop.org/documentation/gstreamer/gstpad.html?gi-language=c#GstPadProbeType) for details
- `callback`: the callback function pointer
- `user_data`: user data passed to the callback
- `destroy_data`: a `GDestroyNotify` invoked for `user_data` when the probe is removed

The return value is an unsigned integer id that identifies the probe; it is later passed to `gst_pad_remove_probe()` to release the probe.

### GstPadProbeCallback

```c
GstPadProbeReturn
(*GstPadProbeCallback) (GstPad * pad,
                        GstPadProbeInfo * info,
                        gpointer user_data)
```

The probe callback invoked when the pad is in the matching state; it may modify the data pointed to by `info`.

### GstPadProbeInfo

```c
struct _GstPadProbeInfo {
  GstPadProbeType type;
  gulong id;
  gpointer data;
  guint64 offset;
  guint size;

  /*< private >*/
  union {
    gpointer _gst_reserved[GST_PADDING];
    struct {
      GstFlowReturn flow_ret;
    } abi;
  } ABI;
};
```

`data` has a different type depending on the probe type. You can operate on the `data` pointer directly, or use the accessors that GstPadProbeInfo provides to get at the data underneath. The most commonly used one is `gst_pad_probe_info_get_buffer()`, which returns the GstBuffer passing through the pad.
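To make the API concrete, here is a minimal illustrative sketch (not taken from this repo's sources) that installs a `GST_PAD_PROBE_TYPE_BUFFER` probe on the `src` pad of `queue0` and prints the PTS of every buffer flowing through:

```c
static GstPadProbeReturn cb_print_pts (GstPad* pad,
    GstPadProbeInfo* info, gpointer user_data)
{
    /* For a BUFFER probe, info->data is the GstBuffer; the accessor below
     * does the type checking for us. */
    GstBuffer* buffer = gst_pad_probe_info_get_buffer (info);
    g_print ("buffer pts: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (GST_BUFFER_PTS (buffer)));
    return GST_PAD_PROBE_OK;
}

/* in the setup code: */
GstPad* pad = gst_element_get_static_pad (queue0, "src");
gulong probe_id = gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
    cb_print_pts, NULL, NULL);

/* ...and when the probe is no longer needed: */
gst_pad_remove_probe (pad, probe_id);
gst_object_unref (pad);
```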
## Pipeline

### Overview

![SampleFrame](images/gstpadprobe/SampleFrame.png)

In the example of the [GStreamer-APP](https://ricardolu.gitbook.io/gstreamer/application-development/app) tutorial, we passed GstBuffers to user space through `appsink`, used OpenCV to draw a red rectangle plus the string "appsink", drew a green rectangle plus the string "appsrc" in the `appsrc` callback, and finally converted the drawn cv::Mat back into a GstBuffer, pushed it back into the pipeline, and displayed it on screen with `waylandsink`.

As mentioned at the beginning, `appsink` and `appsrc` each live in their own pipeline, so to keep the program running correctly the user has to maintain the data synchronization between the two pipelines by hand, which is a headache. On top of that, two memory copies happen just for the drawing, which costs a share of CPU in a real application. In this tutorial, we instead register a probe callback of type `GST_PAD_PROBE_TYPE_BUFFER` on the `src` pad of `queue0`, take the GstBuffer passing through `queue0`, attach the content to be drawn directly to that buffer's metadata, and let `qtioverlay` do the actual drawing.

### qtioverlay

`qtioverlay` is an overlay plugin on Qualcomm platforms. Internally it relies on the `metadata` and calls the C2D library to draw bounding boxes and simple bbox text on `NV12` images. To support changing the overlay color dynamically, I added a `meta-color` property to it; for details on the modification and its usage, read [Qualcomm-gst-plugin: qtioverlay](https://ricardolu.gitbook.io/gstreamer/qualcomm-gstreamer-plugins/qtioverlay).

**Note:** if you only need to draw rectangles on `NV12` images, the [draw-yuv-rectangle](https://github.com/gesanqiu/draw-rectangle-on-YUV) library implements the same functionality.

## Issue

### tee's request pads

[Basic tutorial 7: Multithreading and Pad Availability](https://ricardolu.gitbook.io/gstreamer/basic-theory/basic-tutorial-7-multithreading-and-pad-availability) uses `gst_element_request_pad_simple()` to request the generated `src` pads from `tee`, and `gst_element_release_request_pad()` to release the requested GstPads.

```c
GstPad *
gst_element_request_pad_simple (GstElement * element,
                                const gchar * name)
```

Note, however, that `gst_element_request_pad_simple()` is a new API introduced in GStreamer 1.20; older versions should request the pad with `gst_element_get_request_pad()` instead, as the guarded sketch below shows.
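Here is a small guard I would use (an illustrative sketch, not code from this repo) to keep the same source building on both sides of the 1.20 boundary:

```c
GstPad* teepad = NULL;

#if GST_CHECK_VERSION(1, 20, 0)
teepad = gst_element_request_pad_simple (tee, "src_%u");
#else
teepad = gst_element_get_request_pad (tee, "src_%u");
#endif

/* ...link teepad to the branch's queue, and on teardown: */
gst_element_release_request_pad (tee, teepad);
gst_object_unref (teepad);
```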
### GstBuffer isn't writable - `cb_queue0_probe()`

```shell
(GstPadProbe:9069): GStreamer-CRITICAL **: 14:05:03.871:
gst_buffer_add_meta: assertion 'gst_buffer_is_writable (buffer)' failed
```

1. Wait for the GstBuffer to be synchronized

   The `tee` issue [Buffers not writable after tee](https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/609) mentions:

   > If there are multiple references to a single buffer, writing while another thread may be reading results in data corruption.

   The key point is that the threads of the pipeline's branches each hold a different reference to the same buffer, which presumes that the `tee` plugin only performs a shallow copy rather than a deep copy. The official description of `tee` is actually rather vague: it only says "Split data to multiple pads", using "split" rather than "copy" or "reference", so I am not certain about `tee`'s underlying mechanism.

   If we go with the idea that `tee` merely bumps the reference count, the display branch and the appsink branch are using different references to the same GstBuffer. So when `cb_queue0_probe()` asks for the probed buffer, `qtivtransform` may be reading from or writing to that very buffer at the same time; for thread safety it naturally gets locked, which is why the probed buffer is not writable.

   My solution is therefore to add another probe on the `src` pad of `qtivtransform`: once a GstBuffer reaches that `src` pad, `qtivtransform` is done with it, so that probe unlocks and notifies `cb_queue0_probe` that it may take the buffer and do its work.

```c
// sync
if (info->type & GST_PAD_PROBE_TYPE_BUFFER && !vp->isExited) {
    g_mutex_lock (&vp->m_syncMuxtex);
    while (g_atomic_int_get (&vp->m_syncCount) <= 0)
        g_cond_wait (&vp->m_syncCondition, &vp->m_syncMuxtex);
    if (!g_atomic_int_dec_and_test (&vp->m_syncCount)) {
        //LOG_INFO_MSG ("m_syncCount:%d/%d", vp->m_syncCount,
        //    vp->pipeline_id_);
    }
    g_mutex_unlock (&vp->m_syncMuxtex);
}

// osd the result
if (vp->m_getResultFunc) {
    const auto result = vp->m_getResultFunc (vp->m_getResultArgs);
    if (result && vp->m_procDataFunc) {
        vp->m_procDataFunc (buffer, result);
    }
}
```

2. `gst_buffer_make_writable()`

   From the GstBuffer documentation we know that we can also use `gst_buffer_make_writable()` to copy the buffer so that it becomes writable. If the original buffer is already writable, the call simply returns it and no copy takes place, so it does not cost much performance.

   Once the buffer has been processed, push it with `gst_pad_push()` to the `sink` pad of the next element linked to this `src` pad.

```c++
buffer = gst_buffer_make_writable (buffer);

// osd the result
if (vp->m_getResultFunc) {
    const auto result = vp->m_getResultFunc (vp->m_getResultArgs);
    if (result && vp->m_procDataFunc) {
        vp->m_procDataFunc (buffer, result);
    }
}

gst_pad_push (pad, buffer);
```

## Summary

At this point I believe the reader is able to develop their own pipeline. For a developer on an embedded platform, performance is always the first goal, so in practice the architecture of a pipeline needs to be weighed and optimized again and again. In fact, after working through the GStreamer-APP and GstPadProbe examples you should already have a basic sense for optimization. On the topic of architecture optimization, I encourage you to read, in order, the READMEs of the three repos below; they record the complete journey, from birth to maturity, of a yolov3 object-detection video application I built on the GStreamer framework:

[Ericsson-Yolov3-SNPE](https://github.com/gesanqiu/Ericsson-Yolov3-SNPE)

[Gst-AIDemo-Optimize](https://github.com/gesanqiu/Gst-AIDemo-Optimize)

[yolov3-thread-pool](https://github.com/gesanqiu/yolov3-thread-pool)

I hope these give you some inspiration.

================================================
FILE: application_develop/GstPadProbe/inc/Common.h
================================================
/*
 * @Description: Common Utils.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 12:24:25
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-29 13:31:00
 */
#pragma once

// Headers inferred from what this project uses (the original angle-bracket
// includes were not preserved in the extract).
#include <functional>
#include <iostream>
#include <memory>
#include <string>

#include <glib.h>
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/opencv.hpp>

#define LOG_ERROR_MSG(msg, ...) \
    g_print("** ERROR: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

#define LOG_INFO_MSG(msg, ...) \
    g_print("** INFO: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

#define LOG_WARN_MSG(msg, ...) \
    g_print("** WARN: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

// callback functions
// NOTE: the template arguments of these std::function typedefs were not
// preserved; cv::Mat and the forward-declared result type below are
// reconstructions based on how the callbacks are used in this example.
struct InferenceResult;  // hypothetical placeholder for the lost result type

typedef std::function<void (std::shared_ptr<cv::Mat>, void*)> SinkPutDataFunc;
typedef std::function<std::shared_ptr<cv::Mat> (void*)> SrcGetDataFunc;
typedef std::function<std::shared_ptr<InferenceResult> (void*)> ProbeGetResultFunc;
typedef std::function<void (GstBuffer*, const std::shared_ptr<InferenceResult>&)> ProcDataFunc;

================================================
FILE: application_develop/GstPadProbe/inc/DoubleBufferCache.h
================================================
/*
 * @Description: Double Buffer Cache Implement.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-29 08:51:01
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-29 12:39:35
 */
#pragma once

// Headers inferred from usage (the originals were not preserved).
#include <atomic>
#include <cstdio>
#include <functional>
#include <memory>
#include <mutex>
#include <string>

/** @brief Shared-buffer cache manager. */
template <typename T>
class DoubleBufCache {
public:
    /** @brief constructor
     *  @param[in] notify_func When a new buffer is fed, it triggers the function handle.
     */
    DoubleBufCache(std::function<void ()> notify_func = std::function<void ()>{nullptr}) noexcept
        : swap_ready(false) {
        this->notify_func = notify_func;
    }

    /** @brief destructor */
    ~DoubleBufCache() noexcept {
        if (!debug_info.empty()) {
            printf("DoubleBufCache %s destroyed.\n", debug_info.c_str());
        }
    }

    /** @brief Put the latest buffer into cache queue to be processed.
     *
     *  Giving up control of previous front buffer.
     *  @param[in] pending The latest buffer.
     */
    void feed(std::shared_ptr<T> pending) {
        if (nullptr == pending.get()) {
            throw "ERROR: feed an empty buffer to DoubleBufCache";
        }
        swap_mtx.lock();
        front_sp = pending;
        swap_mtx.unlock();
        swap_ready = true;
        if (notify_func) {
            notify_func();
        }
        return;
    }

    /** @brief Get the front buffer.
     *  @return Front buffer.
     */
    std::shared_ptr<T> front() noexcept {
        return front_sp;
    }

    /** @brief Fetch the shared back buffer.
     *  @return Back buffer.
     */
    std::shared_ptr<T> fetch() noexcept {
        if (swap_ready) {
            swap_mtx.lock();
            back_sp = front_sp;
            swap_mtx.unlock();
            swap_ready = false;
        }
        return back_sp;
    }

private:
    //! Notification function will be called, if a new buffer fed.
    std::function<void ()> notify_func;
    //! The buffer cache can be swapped if the flag is equal to true.
    std::atomic<bool> swap_ready;
    //! Swapping mutex lock for thread safety.
    std::mutex swap_mtx;
    //! Front buffer for previous results saving.
    std::shared_ptr<T> front_sp;
    //! Back buffer to be fetched.
    std::shared_ptr<T> back_sp;

public:
    //! Indicate the name of an instantiated object for debug.
    std::string debug_info;
};
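// Usage sketch (illustrative, not part of the original header): one producer
// thread feeds the newest frame, one consumer thread fetches it.
//
//     DoubleBufCache<cv::Mat> frameCache;
//
//     // producer thread:
//     frameCache.feed(std::make_shared<cv::Mat>(frame));
//
//     // consumer thread: fetch() swaps in the front buffer only when a new
//     // one has been fed; otherwise it keeps returning the previous back
//     // buffer, so the consumer never blocks on the producer.
//     std::shared_ptr<cv::Mat> latest = frameCache.fetch();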
* @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-27 08:11:39 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-09-10 03:26:45 */ #pragma once #include "Common.h" typedef struct _VideoPipelineConfig { std::string src; /*-------------qtivtransform-------------*/ std::string conv_format; int conv_width; int conv_height; }VideoPipelineConfig; class VideoPipeline { public: VideoPipeline (const VideoPipelineConfig& config); bool Create (void); bool Start (void); bool Pause (void); bool Resume (void); void Destroy (void); void SetCallbacks (SinkPutDataFunc func, void* args); void SetCallbacks (ProbeGetResultFunc func, void* args); void SetCallbacks (ProcDataFunc func, void* args); ~VideoPipeline (void); public: SinkPutDataFunc m_putDataFunc; void* m_putDataArgs; ProbeGetResultFunc m_getResultFunc; void* m_getResultArgs; ProcDataFunc m_procDataFunc; void* m_procDataArgs; unsigned long m_queue0_probe; unsigned long m_trans_sink_probe; unsigned long m_trans_src_probe; VideoPipelineConfig m_config; GstElement* m_gstPipeline; volatile gint m_syncCount; volatile gboolean isExited; GMutex m_syncMuxtex; GCond m_syncCondition; GMutex m_mutex; GstElement* m_source; GstElement* m_qtdemux; GstElement* m_rtph264depay; GstElement* m_h264parse; GstElement* m_decoder; GstElement* m_tee; GstPad* m_teeDisplayPad; GstPad* m_teeAppsinkPad; GstElement* m_queue0; GstElement* m_qtioverlay; GstElement* m_display; GstElement* m_queue1; GstElement* m_qtivtrans; GstElement* m_capfilter; GstElement* m_appsink; }; /* gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! qtivdec ! tee name=t1 t1. ! queue ! qtioverlay meta-color=true ! waylandsink t1. ! queue ! qtivtransform ! video/x-raw,format=BGR,width=1920,height=1080 ! appsink */ ================================================ FILE: application_develop/GstPadProbe/src/VideoPipeline.cpp ================================================ /* * @Description: Implement of VideoPipeline. 
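 * @note Probe choreography: cb_sync_buffer_probe (qtivtransform src pad)
 *       increments m_syncCount and signals m_syncCondition each time a
 *       buffer leaves the transform; cb_queue0_probe (queue0 src pad)
 *       waits on that condition before attaching OSD metadata, so it only
 *       touches buffers that qtivtransform has finished with.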
* @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-27 12:01:39 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-09-10 08:16:03 */ #include "VideoPipeline.h" static GstPadProbeReturn cb_sync_before_buffer_probe ( GstPad* pad, GstPadProbeInfo* info, gpointer user_data) { //LOG_INFO_MSG ("cb_sync_before_buffer_probe called"); VideoPipeline* vp = reinterpret_cast (user_data); GstBuffer* buffer = (GstBuffer*) info->data; return GST_PAD_PROBE_OK; } static GstPadProbeReturn cb_sync_buffer_probe ( GstPad* pad, GstPadProbeInfo* info, gpointer user_data) { //LOG_INFO_MSG ("cb_sync_buffer_probe called"); VideoPipeline* vp = reinterpret_cast (user_data); GstBuffer* buffer = (GstBuffer*) info->data; // sync if (info->type & GST_PAD_PROBE_TYPE_BUFFER) { g_mutex_lock (&vp->m_syncMuxtex); g_atomic_int_inc (&vp->m_syncCount); g_cond_signal (&vp->m_syncCondition); g_mutex_unlock (&vp->m_syncMuxtex); } return GST_PAD_PROBE_OK; } static GstPadProbeReturn cb_queue0_probe ( GstPad* pad, GstPadProbeInfo* info, gpointer user_data) { // LOG_INFO_MSG ("cb_queue0_probe called"); VideoPipeline* vp = reinterpret_cast (user_data); GstBuffer* buffer = (GstBuffer*) info->data; // sync if (info->type & GST_PAD_PROBE_TYPE_BUFFER && !vp->isExited) { g_mutex_lock (&vp->m_syncMuxtex); while (g_atomic_int_get (&vp->m_syncCount) <= 0) g_cond_wait (&vp->m_syncCondition, &vp->m_syncMuxtex); if (!g_atomic_int_dec_and_test (&vp->m_syncCount)) { //LOG_INFO_MSG ("m_syncCount:%d/%d", vp->m_syncCount, // vp->pipeline_id_); } g_mutex_unlock (&vp->m_syncMuxtex); } // osd the result if (vp->m_getResultFunc) { const std::shared_ptr result = vp->m_getResultFunc (vp->m_getResultArgs); if (result && vp->m_procDataFunc) { vp->m_procDataFunc (buffer, result); } } // LOG_INFO_MSG ("cb_osd_buffer_probe exited"); return GST_PAD_PROBE_OK; } static GstFlowReturn cb_appsink_new_sample ( GstElement* appsink, gpointer user_data) { // LOG_INFO_MSG ("cb_appsink_new_sample called, user data: %p", user_data); VideoPipeline* vp = reinterpret_cast (user_data); GstSample* sample = NULL; GstBuffer* buffer = NULL; GstMapInfo map; const GstStructure* info = NULL; GstCaps* caps = NULL; int sample_width = 0; int sample_height = 0; g_signal_emit_by_name (appsink, "pull-sample", &sample); if (sample) { buffer = gst_sample_get_buffer (sample); if ( buffer == NULL ) { LOG_ERROR_MSG ("get buffer is null"); goto exit; } gst_buffer_map (buffer, &map, GST_MAP_READ); caps = gst_sample_get_caps (sample); if ( caps == NULL ) { LOG_ERROR_MSG ("get caps is null"); goto exit; } info = gst_caps_get_structure (caps, 0); if ( info == NULL ) { LOG_ERROR_MSG ("get info is null"); goto exit; } // ---- Read frame and convert to opencv format --------------- // convert gstreamer data to OpenCV Mat, you could actually // resolve height / width from caps... 
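    // caps on this branch are pinned by the capsfilter
    // (video/x-raw,format=BGR plus the configured width/height), so the
    // two reads below should not fail here; gst_structure_get_int()
    // returns FALSE if the field is absent or not an integer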
gst_structure_get_int (info, "width", &sample_width); gst_structure_get_int (info, "height", &sample_height); // appsink product queue produce { // init a cv::Mat with gst buffer address: deep copy if (map.data == NULL) { LOG_ERROR_MSG("appsink buffer data empty\n"); return GST_FLOW_OK; } cv::Mat img (sample_height, sample_width, CV_8UC3, (unsigned char*)map.data, cv::Mat::AUTO_STEP); img = img.clone(); if (vp->m_putDataFunc) { vp->m_putDataFunc(std::make_shared (img), vp->m_putDataArgs); } else { goto exit; } } } exit: if (buffer) { gst_buffer_unmap (buffer, &map); } if (sample) { gst_sample_unref (sample); } return GST_FLOW_OK; } #ifdef RTSP_SOURCE static void cb_rtspsrc_pad_added ( GstElement *src, GstPad *new_pad, gpointer user_data) { GstPadLinkReturn ret; GstCaps *new_pad_caps = NULL; GstStructure *new_pad_struct = NULL; const gchar *new_pad_type = NULL; VideoPipeline* vp = reinterpret_cast (user_data); GstPad* sink_pad = gst_element_get_static_pad ( reinterpret_cast (vp->m_rtph264depay), "sink"); LOG_INFO_MSG ("Received new pad '%s' from '%s':", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src)); /* Check the new pad's name */ if (!g_str_has_prefix (GST_PAD_NAME(new_pad), "recv_rtp_src_")) { LOG_ERROR_MSG ("It is not the right pad. Need recv_rtp_src_. Ignoring."); goto exit; } /* If our converter is already linked, we have nothing to do here */ if (gst_pad_is_linked(sink_pad)) { LOG_ERROR_MSG (" Sink pad from %s already linked. Ignoring.\n", GST_ELEMENT_NAME (src)); goto exit; } /* Check the new pad's type */ new_pad_caps = gst_pad_query_caps(new_pad, NULL); new_pad_struct = gst_caps_get_structure(new_pad_caps, 0); new_pad_type = gst_structure_get_name(new_pad_struct); /* Attempt the link */ ret = gst_pad_link(new_pad, sink_pad); if (GST_PAD_LINK_FAILED (ret)) { LOG_ERROR_MSG ("Fail to link rtspsrc and rtph264depay"); } exit: /* Unreference the new pad's caps, if we got them */ if (new_pad_caps != NULL) gst_caps_unref(new_pad_caps); /* Unreference the sink pad */ gst_object_unref(sink_pad); } #endif #ifdef FILE_SOURCE static void cb_qtdemux_pad_added ( GstElement* src, GstPad* new_pad, gpointer user_data) { GstPadLinkReturn ret; GstCaps* new_pad_caps = NULL; GstStructure* new_pad_struct = NULL; const gchar* new_pad_type = NULL; VideoPipeline* vp = reinterpret_cast (user_data); GstPad* v_sinkpad = gst_element_get_static_pad ( reinterpret_cast (vp->m_h264parse), "sink"); new_pad_caps = gst_pad_get_current_caps (new_pad); new_pad_struct = gst_caps_get_structure (new_pad_caps, 0); new_pad_type = gst_structure_get_name (new_pad_struct); if (!g_str_has_prefix (new_pad_type, "video/x-h264")) { LOG_WARN_MSG ("It has type '%s' which is not raw video. 
Ignoring.", new_pad_type); goto exit; } /* Attempt the link */ ret = gst_pad_link (new_pad, v_sinkpad); if (GST_PAD_LINK_FAILED (ret)) { LOG_ERROR_MSG ("Fail to link qtdemux and h264parse"); } exit: /* Unreference the new pad's caps, if we got them */ if (new_pad_caps != NULL) gst_caps_unref (new_pad_caps); /* Unreference the sink pad */ gst_object_unref (v_sinkpad); } #endif VideoPipeline::VideoPipeline (const VideoPipelineConfig& config) { m_config = config; m_syncCount = 0; isExited = false; m_queue0_probe = -1; m_trans_sink_probe = -1; m_trans_src_probe = -1; g_mutex_init (&m_syncMuxtex); g_cond_init (&m_syncCondition); g_mutex_init (&m_mutex); } VideoPipeline::~VideoPipeline () { Destroy (); } bool VideoPipeline::Create (void) { GstCaps* m_transCaps; GstPad *m_gstPad; if (!(m_gstPipeline = gst_pipeline_new ("video-pipeline"))) { LOG_ERROR_MSG ("Failed to create pipeline named video-pipeline"); goto exit; } gst_pipeline_set_auto_flush_bus (GST_PIPELINE (m_gstPipeline), true); #ifdef RTSP_SOURCE if (!(m_source = gst_element_factory_make ("rtspsrc", "src"))) { LOG_ERROR_MSG ("Failed to create element rtspsrc named src"); goto exit; } g_object_set (G_OBJECT (m_source), "location", m_config.src.c_str(), NULL); g_signal_connect(GST_OBJECT (m_source), "pad-added", G_CALLBACK(cb_rtspsrc_pad_added), reinterpret_cast (this)); gst_bin_add_many (GST_BIN (m_gstPipeline), m_source, NULL); if (!(m_rtph264depay = gst_element_factory_make ("rtph264depay", "depay"))) { LOG_ERROR_MSG ("Failed to create element rtph264depay named depay"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_rtph264depay, NULL); #endif #ifdef FILE_SOURCE if (!(m_source = gst_element_factory_make ("filesrc", "src"))) { LOG_ERROR_MSG ("Failed to create element filesrc named src"); goto exit; } g_object_set (G_OBJECT (m_source), "location", m_config.src.c_str(), NULL); gst_bin_add_many (GST_BIN (m_gstPipeline), m_source, NULL); if (!(m_qtdemux = gst_element_factory_make ("qtdemux", "demux"))) { LOG_ERROR_MSG ("Failed to create element qtdemux named demux"); goto exit; } // Link qtdemux with h264parse g_signal_connect (m_qtdemux, "pad-added", G_CALLBACK(cb_qtdemux_pad_added), reinterpret_cast (this)); gst_bin_add_many (GST_BIN (m_gstPipeline), m_qtdemux, NULL); if (!gst_element_link_many (m_source, m_qtdemux, NULL)) { LOG_ERROR_MSG ("Failed to link filesrc->qtdemux"); goto exit; } #endif if (!(m_h264parse = gst_element_factory_make ("h264parse", "parse"))) { LOG_ERROR_MSG ("Failed to create element h264parse named parse"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_h264parse, NULL); if (!(m_decoder = gst_element_factory_make ("qtivdec", "decode"))) { LOG_ERROR_MSG ("Failed to create element qtivdec named decode"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_decoder, NULL); if (!(m_tee = gst_element_factory_make ("tee", "t1"))) { LOG_ERROR_MSG ("Failed to create element tee named t1"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_tee, NULL); if (!(m_queue0 = gst_element_factory_make ("queue", "queue0"))) { LOG_ERROR_MSG ("Failed to create element queue named queue0"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_queue0, NULL); // add probe to queue0 m_gstPad = gst_element_get_static_pad (m_queue0, "src"); m_queue0_probe = gst_pad_add_probe (m_gstPad, (GstPadProbeType) ( GST_PAD_PROBE_TYPE_BUFFER), cb_queue0_probe, reinterpret_cast (this), NULL); gst_object_unref (m_gstPad); if (!(m_qtioverlay = gst_element_factory_make ("qtioverlay", "overlay"))) { LOG_ERROR_MSG 
("Failed to create element qtioverlay named overlay"); goto exit; } g_object_set (G_OBJECT (m_qtioverlay), "meta-color", true, NULL); gst_bin_add_many (GST_BIN (m_gstPipeline), m_qtioverlay, NULL); if (!(m_display = gst_element_factory_make ("waylandsink", "display"))) { LOG_ERROR_MSG ("Failed to create element waylandsink named display"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_display, NULL); #ifdef RTSP_SOURCE if (!gst_element_link_many (m_rtph264depay, m_h264parse, m_decoder, m_tee, m_queue0, m_qtioverlay, m_display, NULL)) { LOG_ERROR_MSG ("Failed to link rtph264depay->h264parse->qtivdec" "->tee->queue0->qtioverlay->waylandsink"); goto exit; } #endif #ifdef FILE_SOURCE if (!gst_element_link_many (m_h264parse, m_decoder, m_tee, m_queue0, m_qtioverlay, m_display, NULL)) { LOG_ERROR_MSG ("Failed to link h264parse->qtivdec" "->tee->queue0->qtioverlay->waylandsink"); goto exit; } #endif if (!(m_queue1 = gst_element_factory_make ("queue", "queue1"))) { LOG_ERROR_MSG ("Failed to create element queue named queue1"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_queue1, NULL); if (!(m_qtivtrans = gst_element_factory_make ("qtivtransform", "transform"))) { LOG_ERROR_MSG ("Failed to create element qtivtransform named transform"); goto exit; } gst_bin_add_many (GST_BIN (m_gstPipeline), m_qtivtrans, NULL); m_transCaps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, m_config.conv_format.c_str(), "width", G_TYPE_INT, m_config.conv_width, "height", G_TYPE_INT, m_config.conv_height, NULL); if (!(m_capfilter = gst_element_factory_make("capsfilter", "capfilter"))) { LOG_ERROR_MSG ("Failed to create element capsfilter named capfilter"); goto exit; } g_object_set (G_OBJECT(m_capfilter), "caps", m_transCaps, NULL); gst_caps_unref (m_transCaps); gst_bin_add_many (GST_BIN (m_gstPipeline), m_capfilter, NULL); m_gstPad = gst_element_get_static_pad (m_qtivtrans, "sink"); m_trans_sink_probe = gst_pad_add_probe (m_gstPad, (GstPadProbeType) ( GST_PAD_PROBE_TYPE_BUFFER), cb_sync_before_buffer_probe, reinterpret_cast (this), NULL); gst_object_unref (m_gstPad); m_gstPad = gst_element_get_static_pad (m_qtivtrans, "src"); m_trans_src_probe = gst_pad_add_probe (m_gstPad, (GstPadProbeType) ( GST_PAD_PROBE_TYPE_BUFFER), cb_sync_buffer_probe, reinterpret_cast (this), NULL); gst_object_unref (m_gstPad); if (!(m_appsink = gst_element_factory_make ("appsink", "appsink"))) { LOG_ERROR_MSG ("Failed to create element appsink named appsink"); goto exit; } g_object_set (m_appsink, "emit-signals", TRUE, NULL); g_signal_connect (m_appsink, "new-sample", G_CALLBACK (cb_appsink_new_sample), reinterpret_cast (this)); gst_bin_add_many (GST_BIN (m_gstPipeline), m_appsink, NULL); if (!gst_element_link_many (m_tee, m_queue1, m_qtivtrans, m_capfilter, m_appsink, NULL)) { LOG_ERROR_MSG ("Failed to link tee->queue1->" "qtivtransform->capfilter->appsink"); goto exit; } return true; exit: LOG_ERROR_MSG ("Failed to create video pipeline"); return false; } bool VideoPipeline::Start (void) { if (GST_STATE_CHANGE_FAILURE == gst_element_set_state (m_gstPipeline, GST_STATE_PLAYING)) { LOG_ERROR_MSG ("Failed to set pipeline to playing state"); return false; } return true; } bool VideoPipeline::Pause (void) { GstState state, pending; LOG_INFO_MSG ("StopPipeline called"); if (GST_STATE_CHANGE_ASYNC == gst_element_get_state ( m_gstPipeline, &state, &pending, 5 * GST_SECOND / 1000)) { LOG_WARN_MSG ("Failed to get state of pipeline"); return false; } if (state == GST_STATE_PAUSED) { return true; } else if (state 
== GST_STATE_PLAYING) { gst_element_set_state (m_gstPipeline, GST_STATE_PAUSED); gst_element_get_state (m_gstPipeline, &state, &pending, GST_CLOCK_TIME_NONE); return true; } else { LOG_WARN_MSG ("Invalid state of pipeline(%d)", GST_STATE_CHANGE_ASYNC); return false; } } bool VideoPipeline::Resume (void) { GstState state, pending; LOG_INFO_MSG ("StartPipeline called"); if (GST_STATE_CHANGE_ASYNC == gst_element_get_state ( m_gstPipeline, &state, &pending, 5 * GST_SECOND / 1000)) { LOG_WARN_MSG ("Failed to get state of pipeline"); return false; } if (state == GST_STATE_PLAYING) { return true; } else if (state == GST_STATE_PAUSED) { gst_element_set_state (m_gstPipeline, GST_STATE_PLAYING); gst_element_get_state (m_gstPipeline, &state, &pending, GST_CLOCK_TIME_NONE); return true; } else { LOG_WARN_MSG ("Invalid state of pipeline(%d)", GST_STATE_CHANGE_ASYNC); return false; } } void VideoPipeline::Destroy (void) { GstPad* teeSrcPad; while (teeSrcPad = gst_element_get_request_pad (m_tee, "src_%u")) { gst_element_release_request_pad (m_tee, teeSrcPad); g_object_unref (teeSrcPad); } if (m_gstPipeline) { isExited = true; g_mutex_lock (&m_syncMuxtex); g_atomic_int_inc (&m_syncCount); g_cond_signal (&m_syncCondition); g_mutex_unlock (&m_syncMuxtex); gst_element_set_state (m_gstPipeline, GST_STATE_NULL); gst_object_unref (m_gstPipeline); m_gstPipeline = NULL; } if (m_trans_src_probe != -1 && m_queue0) { GstPad *gstpad = gst_element_get_static_pad (m_qtivtrans, "sink"); if (!gstpad) { LOG_ERROR_MSG ("Could not find '%s' in '%s'", "src", GST_ELEMENT_NAME(m_qtivtrans)); } gst_pad_remove_probe(gstpad, m_trans_src_probe); gst_object_unref (gstpad); m_trans_src_probe = -1; } if (m_trans_src_probe != -1 && m_queue0) { GstPad *gstpad = gst_element_get_static_pad (m_qtivtrans, "src"); if (!gstpad) { LOG_ERROR_MSG ("Could not find '%s' in '%s'", "src", GST_ELEMENT_NAME(m_qtivtrans)); } gst_pad_remove_probe(gstpad, m_trans_src_probe); gst_object_unref (gstpad); m_trans_src_probe = -1; } if (m_queue0_probe != -1 && m_queue0) { GstPad *gstpad = gst_element_get_static_pad (m_queue0, "src"); if (!gstpad) { LOG_ERROR_MSG ("Could not find '%s' in '%s'", "src", GST_ELEMENT_NAME(m_queue0)); } gst_pad_remove_probe(gstpad, m_queue0_probe); gst_object_unref (gstpad); m_queue0_probe = -1; } g_mutex_clear (&m_mutex); g_mutex_clear (&m_syncMuxtex); g_cond_clear (&m_syncCondition); } void VideoPipeline::SetCallbacks (SinkPutDataFunc func, void* args) { LOG_INFO_MSG ("set pudata callback called"); m_putDataFunc = func; m_putDataArgs = args; } void VideoPipeline::SetCallbacks (ProbeGetResultFunc func, void* args) { LOG_INFO_MSG ("set getdata callback called"); m_getResultFunc = func; m_getResultArgs = args; } void VideoPipeline::SetCallbacks (ProcDataFunc func, void* args) { LOG_INFO_MSG ("set procdata callback called"); m_procDataFunc = func; m_procDataArgs = args; } ================================================ FILE: application_develop/GstPadProbe/src/main.cpp ================================================ /* * @Description: Test Program. 
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-28 09:17:16
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-09-10 03:27:11
 */
// headers needed below; VideoPipeline.h pulls in Common.h
// (gst, glib, OpenCV, std::function/std::shared_ptr)
#include <sys/stat.h>
#include <memory>
#include <string>
#include <gflags/gflags.h>

#include "VideoPipeline.h"
#include "DoubleBufferCache.h"

static GMainLoop* g_main_loop = NULL;

static bool validateSrcUri (const char* name, const std::string& value)
{
    if (!value.compare("")) {
        LOG_ERROR_MSG ("Source Uri required!");
        return false;
    }

    // for absolute path
    std::size_t pos = value.find("//");
    if (pos != std::string::npos) {
        std::string uri_type = value.substr(0, pos);
        std::string uri_path = value.substr(pos);
        if (!uri_type.compare ("file:")) {
            // make sure the file exists
            struct stat statbuf;
            if (!stat(uri_path.c_str(), &statbuf)) {
                LOG_INFO_MSG ("Found source file: %s", value.substr(pos).c_str());
                return true;
            }
        } else {
            return true;
        }
    }

    // for relative path
    struct stat statbuf;
    if (!stat(value.c_str(), &statbuf)) {
        LOG_INFO_MSG ("Found source file: %s", value.c_str());
        return true;
    }

    LOG_ERROR_MSG ("Invalid source uri.");
    return false;
}

DEFINE_string (srcuri, "", "uri of the video source, e.g. an mp4 file path or rtsp address");
DEFINE_validator (srcuri, &validateSrcUri);

void putData (std::shared_ptr<cv::Mat> img, void* user_data)
{
    // LOG_INFO_MSG ("putData called");
    DoubleBufCache<cv::Mat>* db =
        reinterpret_cast<DoubleBufCache<cv::Mat>*> (user_data);

    std::string sinkwords ("appsink");
    cv::Point fontpos = cv::Point (100, 115);
    cv::Scalar fontcolor (150, 255, 40);
    cv::putText (*img, sinkwords, fontpos, cv::FONT_HERSHEY_COMPLEX,
        0.8, fontcolor, 2, 0.3);

    cv::Rect rect (100, 100, 1720, 880);
    cv::Scalar rectcolor (0, 200, 0);
    cv::rectangle (*img, rect, rectcolor, 3);

    db->feed (img);
}

std::shared_ptr<cv::Mat> getData (void* user_data)
{
    // LOG_INFO_MSG ("getData called");
    DoubleBufCache<cv::Mat>* db =
        reinterpret_cast<DoubleBufCache<cv::Mat>*> (user_data);
    std::shared_ptr<cv::Mat> img;

    img = db->fetch();

    std::string srcwords ("appsrc");
    cv::Point fontpos = cv::Point (1700, 970);
    cv::Scalar fontcolor (150, 255, 40);
    cv::putText (*img, srcwords, fontpos, cv::FONT_HERSHEY_COMPLEX,
        0.8, fontcolor, 2, 0.3);

    cv::Rect rect (110, 110, 1720, 880);
    cv::Scalar rectcolor (0, 0, 200);
    cv::rectangle (*img, rect, rectcolor, 3);

    return img;
}

std::shared_ptr<cv::Rect> getResult (void* user_data)
{
    // LOG_INFO_MSG ("getResult called");
    cv::Rect rect (110, 110, 1720, 880);
    return std::make_shared<cv::Rect> (rect);
}

// draw rectangle and text on NV12 with qtioverlay meta data.
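// GstMLDetectionMeta / gst_buffer_add_detection_meta() come from the
// qtimlmeta library (qti_gst_plugins/qtioverlay/qtimlmeta/ml_meta.h in this
// repo); qtioverlay walks this metadata downstream and renders the box and
// label onto the frame, which is why no OpenCV drawing is needed here.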
void procData (GstBuffer* buffer, const std::shared_ptr<cv::Rect>& rect)
{
    // LOG_INFO_MSG ("procData called");
    std::string osd_text ("queue0_probe");

    GstMLDetectionMeta* meta = gst_buffer_add_detection_meta (buffer);
    if (!meta) {
        LOG_ERROR_MSG ("Failed to create metadata");
        return;
    }

    GstMLClassificationResult* box_info = (GstMLClassificationResult*) malloc (
        sizeof (GstMLClassificationResult));

    uint32_t label_size = osd_text.size () + 1;
    box_info->name = (char*) malloc (label_size);
    snprintf (box_info->name, label_size, "%s", osd_text.c_str ());

    meta->box_info = g_slist_append (meta->box_info, box_info);
    meta->bbox_color = (200 << 24) + (0 << 16) + (0 << 8) + 0xFF;
    meta->bounding_box.x = rect->x;
    meta->bounding_box.y = rect->y;
    meta->bounding_box.width = rect->width;
    meta->bounding_box.height = rect->height;
}

int main (int argc, char* argv[])
{
    google::ParseCommandLineFlags (&argc, &argv, true);

    VideoPipelineConfig m_vpConfig;
    VideoPipeline* m_vp = NULL;
    SinkPutDataFunc m_putDataFunc;
    ProbeGetResultFunc m_getResultFunc;
    ProcDataFunc m_procDataFunc;
    DoubleBufCache<cv::Mat>* m_dataBufferCache = NULL;
    DoubleBufCache<cv::Rect>* m_resultBufferCache = NULL;

    gst_init (&argc, &argv);

    if (!(g_main_loop = g_main_loop_new (NULL, FALSE))) {
        LOG_ERROR_MSG ("Failed to new a object with type GMainLoop");
        goto exit;
    }

    m_vpConfig.src = FLAGS_srcuri;
    m_vpConfig.conv_format = "BGR";
    m_vpConfig.conv_width = 960;
    m_vpConfig.conv_height = 540;

    m_vp = new VideoPipeline (m_vpConfig);

    m_putDataFunc = std::bind (putData,
        std::placeholders::_1, std::placeholders::_2);
    m_getResultFunc = std::bind (getResult, std::placeholders::_1);
    m_procDataFunc = std::bind (procData,
        std::placeholders::_1, std::placeholders::_2);

    m_dataBufferCache = new DoubleBufCache<cv::Mat> ();
    m_resultBufferCache = new DoubleBufCache<cv::Rect> ();

    m_vp->SetCallbacks (m_putDataFunc, m_dataBufferCache);
    m_vp->SetCallbacks (m_getResultFunc, m_resultBufferCache);
    m_vp->SetCallbacks (m_procDataFunc, NULL);

    if (!m_vp->Create ()) {
        LOG_ERROR_MSG ("Pipeline Create failed.");
        goto exit;
    }

    m_vp->Start ();

    g_main_loop_run (g_main_loop);

exit:
    if (g_main_loop) g_main_loop_unref (g_main_loop);

    if (m_vp) {
        m_vp->Destroy ();
        delete m_vp;
        m_vp = NULL;
    }

    if (m_dataBufferCache) {
        delete m_dataBufferCache;
        m_dataBufferCache = NULL;
    }

    if (m_resultBufferCache) {
        delete m_resultBufferCache;
        m_resultBufferCache = NULL;
    }

    google::ShutDownCommandLineFlags ();
    return 0;
}

================================================
FILE: application_develop/README.md
================================================
# Application Development

[![](https://img.shields.io/badge/Author-@RicardoLu-red.svg)](https://github.com/gesanqiu)![](https://img.shields.io/badge/Version-1.0.0-blue.svg)[![](https://img.shields.io/badge/license-GPL-000000.svg)](https://opensource.org/licenses/GPL-3.0/)

## Overview

As a framework for audio/video application development, GStreamer ships the rapid-prototyping tool `gst-launch-1.0`, with which developers can combine existing plugins into a pipeline by a set of rules and run it. This clearly cannot satisfy more advanced needs: we usually have to operate on the raw audio/video data, with a granularity of at least one video frame or one audio segment. That data is in fact already travelling through the pipeline as a stream between the plugins; to operate on it we need higher access privileges and finer-grained control.

This chapter aims to show how a simple application based on the GStreamer framework is developed, and what functionality we can implement with it.

Code repository for this chapter: [application-develop](https://github.com/gesanqiu/gstreamer-example/tree/main/application_develop)

Chapter contents:

- Two ways to build a pipeline: `gst_parse_launch()` and `gst_element_factory_make()`
- uridecodebin
- appsink/appsrc
- GstBufferPool
- GstPadProbe
- Custom plugins

## Development Platform

- Platform: Qualcomm® QRB5165 (Linux-Ubuntu 18.04)
- GUI: Weston (Wayland)
- Frameworks: GStreamer, OpenCV
- Third-party libraries: gflags, json-glib-1.0, glib-2.0
- Build tool: [CMake](https://ricardolu.gitbook.io/trantor/cmake-in-action)

================================================
FILE: application_develop/app/CMakeLists.txt ================================================ # created by Ricardo Lu in 08/29/2021 cmake_minimum_required(VERSION 3.10) project(app) set(CMAKE_CXX_STANDARD 11) set(OpenCV_DIR "/opt/thundersoft/opencv-4.2.0/lib/cmake/opencv4") find_package(OpenCV REQUIRED) include(FindPkgConfig) pkg_check_modules(GST REQUIRED gstreamer-1.0) pkg_check_modules(GSTAPP REQUIRED gstreamer-app-1.0) pkg_check_modules(GLIB REQUIRED glib-2.0) pkg_check_modules(GFLAGS REQUIRED gflags) include_directories( ${PROJECT_SOURCE_DIR}/inc ${GST_INCLUDE_DIRS} ${GSTAPP_INCLUDE_DIRS} ${GLIB_INCLUDE_DIRS} ${GFLAGS_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS} ) link_directories( ${GST_LIBRARY_DIRS} ${GSTAPP_LIBRARY_DIRS} ${GLIB_LIBRARY_DIRS} ${GFLAGS_LIBRARY_DIRS} ${OpenCV_LIBRARY_DIRS} ) add_executable(${PROJECT_NAME} src/appsink.cpp src/appsrc.cpp src/main.cpp ) target_link_libraries(${PROJECT_NAME} ${GST_LIBRARIES} ${GSTAPP_LIBRARIES} ${GLIB_LIBRARIES} ${GFLAGS_LIBRARIES} ${OpenCV_LIBRARIES} ) ================================================ FILE: application_develop/app/README.md ================================================ # app 为了完成应用程序与GStreamer Pipeline的数据交互,GStreamer提供了两个插件: - [GstAppSink](https://gstreamer.freedesktop.org/documentation/applib/gstappsink.html) – 应用程序从管道中提取GstSample的简便方法 - [GstAppSrc](https://gstreamer.freedesktop.org/documentation/applib/gstappsrc.html) – 应用程序向管道中注入GstBuffer的简单方法 本教程是这两个插件的应用实例,**教程地址:[GStreamer-app](https://ricardolu.gitbook.io/gstreamer/application-development/app)** ## Build & Run ```shell cmake -H. -Bbuild/ cd build make ./app --srcuri ../video.mp4 # you will see one green rectangle named appsink # and one red rectangle named appsrc on your video ``` ================================================ FILE: application_develop/app/doc/app.md ================================================ # App 为了完成应用程序与GStreamer Pipeline的数据交互,GStreamer提供了两个插件: - [GstAppSink](https://gstreamer.freedesktop.org/documentation/applib/gstappsink.html) – 应用程序从管道中提取GstSample的简便方法 - [GstAppSrc](https://gstreamer.freedesktop.org/documentation/applib/gstappsrc.html) – 应用程序向管道中注入GstBuffer的简单方法 - **Github:[gstreamer-app](https://github.com/gesanqiu/gstreamer-example/tree/main/application_develop/app)** ================================================ FILE: application_develop/app/doc/appsink.md ================================================ # Appsink appsink是一个sink插件,具有多种方法允许应用程序获取Pipeline的数据句柄。与大部分插件不同,除了Action signals的方式以外,appsink还提供了一系列的外部接口`gst_app_sink_()`用于数据交互以及appsink属性的动态设置(需要链接`libgstapp.so`)。 ## Properties #### emit-signals appsink的`emit-signals`属性默认为false,假设需要发送`new-preroll`和`new-sample`信号,需要将其设置为true。 ### caps cpas属性用于设置Appsink可以接收的数据格式,但和`appsrc`必须要设置caps属性以便后续和plugin的链接不同,appsink的caps属性为可选项,因为appsink处理的数据单元为GstSample,可以通过`gst_sample_get_caps()`直接从GstSample中获取到其下的GstCaps。 ## signals - `eos`:流结束信号,由stream线程发出。 - `new-preroll`:preroll sample可用信号,只有当`emit-signals`属性为`true`时才会由stream线程发出。 - `preroll`:一个sink元素当且仅当有一个buffer进入pad之后才能完成`PAUSED`状态的转变,这个过程叫做`preroll`。为了能够尽快完成向`PLAYING`状态的转变,避免给用户造成视觉上的延迟,向pipeline中填充buffer(Preroll)是有很有必要的。Preroll在音视频同步方面是非常关键的,确保不会有buffer被sink元素抛弃。 - `new-sample`:新的sample可用信号,只有当`emit-signals`属性为`true`时才会由stream线程发出。 ## GST_APP_API ### GstAppSinkCallbacks ```c typedef struct { void (*eos) (GstAppSink *appsink, gpointer user_data); GstFlowReturn (*new_preroll) (GstAppSink *appsink, gpointer user_data); GstFlowReturn (*new_sample) (GstAppSink *appsink, gpointer user_data); /*< private >*/ gpointer _gst_reserved[GST_PADDING]; } 
GstAppSinkCallbacks; ``` - `*eos`:`eos`信号触发的回调函数指针 - `*new_preroll`:`new_preroll`信号触发的回调函数指针 - `*new_sample`:`new_sample`信号触发的回调函数指针 - `*user_data`:用户向回调函数传递的数据。 ### pull-sample 通常开发者可以使用`gst_app_sink_pull_sample()`和`gst_app_sink_pull_preroll()`来获取appsink中的GstSample, 这两个方法将block线程直到appsink中获取到可用的GstSample或者Pipeline停止播放(end-of-stream),同时还提供了timeout版本:`gst_app_sink_try_pull_sample`和`gst_app_sink_try_pull_preroll`。 - `gst_app_sink_pull_sample` ```c GstSample * gst_app_sink_pull_sample (GstAppSink * appsink) ``` - `gst_app_sink_try_pull_sample` ```c GstSample * gst_app_sink_try_pull_sample (GstAppSink * appsink, GstClockTime timeout) ``` ​ **注:**appsink内部使用一个队列来保存来自stream线程输出的buffer,假如应用程序pull-sample的速度不够快,那么队列将占用越来越多的内存,通常建议使用`max-buffers`属性设置内部队列长度,同时配合`drop`属性用于设置内部队列在队满时是丢帧或者block来避免内存泄露。 ## Action signals - `pull-sample`: ```c g_signal_emit_by_name (self, "pull-sample", user_data, &ret); ``` - 将阻塞线程,直到获取到一个可用的GstSample,或收到EOS信号或者appsink插件状态变为`READY`或`NULL`。 - 只有在appsink处于`PLAYING`状态下才会返回GstSample到user_data,所有新到达的GstSample都会加入appsink的内部队列,因此应用程序可以根据自己的需求以一定的速度来pull sample,但加入消耗速度不够快将造成大量的内存开销。 - `pull-preroll` ```c g_signal_emit_by_name (self, "pull-preroll", user_data, &ret); ``` - 获取Appsink的最后一个preroll sample,即使得appsink变为`PAUSED`状态的sample。 **注:**假设`pull-sample`或`pull-preroll`操作返回的GstSample为空,那么appsink处于停止或者EOS状态,可以使用`gst_app_sink_is_eos()`进行查看。 ## 代码实例 ```c++ // 使用GST_APP_API和Action signal的方式 void CreatePipeline() { // ... if (!(m_appsink = gst_element_factory_make ("appsink", "appsink"))) { LOG_ERROR_MSG ("Failed to create element appsink named appsink"); goto exit; } // equals to gst_app_sink_set_emit_signals (GST_APP_SINK_CAST (m_appsink), true); g_object_set (m_appsink, "emit-signals", TRUE, NULL); // full definition of appsink callbacks /* GstAppSinkCallbacks callbacks = {cb_appsink_eos, cb_appsink_new_preroll, cb_appsink_new_sample}; gst_app_sink_set_callbacks (GST_APP_SINK_CAST (m_appsink), &callbacks, reinterpret_cast (this), NULL); */ g_signal_connect (m_appsink, "new-sample", G_CALLBACK (cb_appsink_new_sample), reinterpret_cast (this)); gst_bin_add_many (GST_BIN (m_sinkPipeline), m_appsink, NULL); //... } GstFlowReturn cb_appsink_new_sample ( GstElement* appsink, gpointer user_data) { // LOG_INFO_MSG ("cb_appsink_new_sample called, user data: %p", user_data); SinkPipeline* sp = reinterpret_cast (user_data); GstSample* sample = NULL; GstBuffer* buffer = NULL; GstMapInfo map; const GstStructure* info = NULL; GstCaps* caps = NULL; GstFlowReturn ret = GST_FLOW_OK; int sample_width = 0; int sample_height = 0; // equals to gst_app_sink_pull_sample (GST_APP_SINK_CAST (appsink), sample); g_signal_emit_by_name (appsink, "pull-sample", &sample, &ret); if (ret != GST_FLOW_OK) { LOG_ERROR_MSG ("can't pull GstSample."); return ret; } if (sample) { buffer = gst_sample_get_buffer (sample); if ( buffer == NULL ) { LOG_ERROR_MSG ("get buffer is null"); goto exit; } gst_buffer_map (buffer, &map, GST_MAP_READ); caps = gst_sample_get_caps (sample); if ( caps == NULL ) { LOG_ERROR_MSG ("get caps is null"); goto exit; } info = gst_caps_get_structure (caps, 0); if ( info == NULL ) { LOG_ERROR_MSG ("get info is null"); goto exit; } // -------- Read frame and convert to opencv format -------- // convert gstreamer data to OpenCV Mat, you could actually // resolve height / width from caps... 
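            // note: the caps are fixed upstream by the capsfilter
            // (video/x-raw,format=BGR,width=1920,height=1080 in this demo);
            // other fields can be read the same way, e.g. the pixel format:
            // const gchar* fmt = gst_structure_get_string (info, "format");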
gst_structure_get_int (info, "width", &sample_width); gst_structure_get_int (info, "height", &sample_height); // customized user action { // init a cv::Mat with gst buffer address: deep copy // sometime you may got a empty buffer if (map.data == NULL) { LOG_ERROR_MSG("appsink buffer data empty\n"); return GST_FLOW_OK; } cv::Mat img (sample_height, sample_width, CV_8UC3, (unsigned char*)map.data, cv::Mat::AUTO_STEP); img = img.clone(); // redirection outside operation: for decoupling use if (sp->m_putDataFunc) { sp->m_putDataFunc(std::make_shared (img), sp->m_putDataArgs); } else { goto exit; } } } exit: if (buffer) { gst_buffer_unmap (buffer, &map); } if (sample) { gst_sample_unref (sample); } return GST_FLOW_OK; } ``` ### customized user action ```c++ { cv::Mat img (sample_height, sample_width, CV_8UC3, (unsigned char*)map.data, cv::Mat::AUTO_STEP); // deep copy img = img.clone(); } ``` 在示例代码中有以上这段,为了在每一帧图像上画框,我使用了OpenCV的接口,因此需要将GstBuffer中的数据转化为`cv::Mat`。`GstBuffer`的数据存放的真实地址由相关的`GstMapInfo`管理,我使用映射的地址`map.data`来构造了一个`cv::Mat`对象。这次构造是浅拷贝,但在这之后我使用`cv::Mat.clone()`方法做了一次深拷贝,这次深拷贝的原因是在映射`gst_buffer_map (buffer, &map, GST_MAP_READ);`时我只申请了`READ`权限。 为什么不申请`WRITE`权限呢,是因为在实际使用过程中发现一旦我同时申请`WRITE`权限,程序终端将会输出一个报错:尝试向一个不可写的Buffer申请写权限,并且我拿到的`GstBuffer`是一个空的buffer,其下的数据也为空。 ```c gboolean gst_buffer_map (GstBuffer * buffer, GstMapInfo * info, GstMapFlags flags) ``` 查看`gst_buffer_map ()`的API说明可以知道,当你请求映射的buffer是可写的但memory是不可写的时候,将自动生成并返回一个可写的拷贝,同时用这个拷贝替换掉只读的buffer。 buffer可写但memory不可写的和硬解码相关,硬解码需要相应的硬件配合,对这类memory的读写通常需要通过相关的驱动接口,在示例代码的pipeline中我使用了高通平台下的硬件解码器`qtivdec`,底层依赖于ION,关于ION的相关资料可以参考文章[ION Memory Control](https://ricardolu.gitbook.io/trantor/ion-memory-control),文章描述了ION Buffer的使用方法,简单来说就是需要我们拿到ION Buffer的句柄,然后通过`mmap`将这块ION Memory映射到用户空间,才能对其进行操作。但是这个ION Buffer是在`qtivdec`插件中申请的,因此假如想要拿到它的句柄需要修改`qtivdec`的源码,维护这个句柄的生存周期并在sink这个buffer的时候将其加入`GstBuffer`的`GstStruture`结构中。 **注:**因为我目前基于高通平台开发,为了性能我在插件的选择上会尽可能选择具备硬件加速的插件,这就为程序引入了不可控因素,但在一定程度上是值得的。虽然这是个教学文档,但同时也是我的学习过程,因此我会将我在开发过程中遇到的一些问题和解决思路记录下来,供大家参考。 ### 资源释放 样例代码由于进行了深拷贝并且将cv::Mat对象交给智能指针来管理,因此我们可以在回调完成之后手动释放相关的GstSample,释放GstSample的同时会自动释放其下的GstBuffer因此我们只用解除GstBuffer的映射。 在通常开发中完全可以appsink直接输出GstSample或者GstBuffer,并在不需要的时候再释放,以实现零拷贝。 ================================================ FILE: application_develop/app/doc/appsrc.md ================================================ # Appsrc 应用程序可以通过Appsrc插件向GStreamer pipeline中插入数据,与大部分插件不同,除了Action signals的方式以外,appsrc还提供了一系列的外部接口`gst_app_src_()`用于数据交互以及appsrc属性的动态设置(需要链接`libgstapp.so`)。 ## Properties #### emit-signals appsrc的`emit-signals`属性默认为true。 ### caps 除非push的buffer具有未知的caps,使用appsrc: - 需要设置caps属性,指定我们会产生何种类型的数据,这样GStreamer会在连接阶段检查后续的Element是否支持此数据类型,否则回调只触发一次就被block在其他插件中。这里的 caps必须为GstCaps对象。 - 使用`gst_app_src_push_sample()`直接push sample,然后接口将接管这个GstSample的控制权(自动释放),并获取这个GstSample的GstCaps作为caps。 ### max-buffers/max-bytes/max-time & block appsrc内部维护一个数据队列,`max-buffers/max-bytes/max-time`这几个属性用于控制这个内部队列的长度。一个填满的队列将发送`enough-data`信号,这时应用程序应该停止向队列push data。 假如`block`属性设置为true,当内部队列为满时将block push-buffer相关方法直到队列不满。 ### stream-type ### is-live ## signals - `enough-data `:appsrc内部数据队列满,推荐在触发信号之后停止`push-buffer`直到need-data信号被触发。 - `need-data`:appsrc需要更多的数据,在回调或者其他线程中需要调用`push-buffer`或者`end-of-stream`,回调函数的`length`参数是一个隐藏参数,当`length=-1`时意味着appsrc可以接收任意bytes的buffer。可以重复调用`push-buffer`直至`enough-data`信号被触发。 - `seekdata`:需要seekable stream-type的支持,具有一个offset表明下一个要被push的buffer的位置。 ### 两种工作模式 - `push-mode` 
push模式由应用程序来控制data的发送,应用程序重复调用`push-buffer/push-sample`方法来触发`enough-data`信号。配合`max-buffers`属性设置队列的长度,通过处理`enough-data`信号和`need-data`信号分别停止或开始调用`push-buffer/push-sample`来控制队列的大小。 - `pull-mode` pull模式和通过`need-data`信号触发`push-buffer`调用。 ## GST_APP_API ### GstAppSrcCallbacks ```c typedef struct { void (*need_data) (GstAppSrc *src, guint length, gpointer user_data); void (*enough_data) (GstAppSrc *src, gpointer user_data); gboolean (*seek_data) (GstAppSrc *src, guint64 offset, gpointer user_data); /*< private >*/ gpointer _gst_reserved[GST_PADDING]; } GstAppSrcCallbacks; ``` ### push-buffer - `gst_app_src_push_buffer` ```c GstFlowReturn gst_app_src_push_buffer (GstAppSrc * appsrc, GstBuffer * buffer) ``` - push-buffer只负责将数据插入appsrc的内部队列中,不负责这个buffer的传输。 - API将接管这个GstBuffer的所有权,自动释放资源。 ### Action signals - `end-of-stream`:appsrc没有可用的buffer信号 - `push-buffer`: ```c g_signal_emit_by_name (self, "push-buffer", buffer, user_data, &ret); ``` - 将GstBuffer添加到appsrc的src pad中,不持有这个GstBuffer的所有权,需要手动释放。 - `push-sample`: ```c g_signal_emit_by_name (self, "push-sample", sample, user_data, &ret); ``` - 将GstSample下的GstBuffer添加到appsrc的src pad中,加入GstSample的GstCaps不符合当前appsrc的cpas,那么将同时把GstSample的GstCpas设置为appsrc的caps属性。不持有这个GstSample的所有权,需要手动释放。 ## 代码实例 ```c++ // 使用GST_APP_API和Action signal的方式 void CreatePipeline() { // ... if (!(m_appsrc = gst_element_factory_make ("appsrc", "appsrc"))) { LOG_ERROR_MSG ("Failed to create element appsrc named appsrc"); goto exit; } m_transCaps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, m_config.src_format.c_str(), "width", G_TYPE_INT, m_config.src_width, "height", G_TYPE_INT, m_config.src_height, NULL); // equals to gst_app_src_set_caps (GST_APP_SRC_CAST (m_appsrc), m_transCaps); g_object_set (G_OBJECT(m_appsrc), "caps", m_transCaps, NULL); gst_caps_unref (m_transCaps); // equals to gst_app_src_set_stream_type (GST_APP_SRC_CAST (m_appsrc), // GST_APP_STREAM_TYPE_STREAM); g_object_set (G_OBJECT(m_appsrc), "stream-type", GST_APP_STREAM_TYPE_STREAM, NULL); g_object_set (G_OBJECT(m_appsrc), "is-live", true, NULL); // full definition of appsrc callbacks /* GstAppSrcCallbacks callbacks = {cb_appsrc_need_data, cb_appsrc_enough_data, cb_appsrc_seek_data}; gst_app_src_set_callbacks (GST_APP_SRC_CAST (m_appsrc), &callbacks, reinterpret_cast (this), NULL); */ g_signal_connect (m_appsrc, "need-data", G_CALLBACK (cb_appsrc_need_data), reinterpret_cast (this)); gst_bin_add_many (GST_BIN (m_srcPipeline), m_appsrc, NULL); // ... 
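    // (sketch, not part of the repo code) in push mode you would also
    // connect "enough-data" and pause your producer while the internal
    // queue is full, e.g.:
    // g_signal_connect (m_appsrc, "enough-data",
    //     G_CALLBACK (cb_appsrc_enough_data),
    //     reinterpret_cast<void*> (this));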
} GstFlowReturn cb_appsrc_need_data ( GstElement* appsrc, guint length, gpointer user_data) { // LOG_INFO_MSG ("cb_appsrc_need_data called, user_data: %p", user_data); SrcPipeline* sp = reinterpret_cast (user_data); GstBuffer* buffer; GstMapInfo map; GstFlowReturn ret = GST_FLOW_OK; std::shared_ptr img; if (sp->m_getDataFunc) { img = sp->m_getDataFunc (sp->m_getDataArgs); int len = img->total() * img->elemSize(); // zero-copy GstBuffer // buffer = gst_buffer_new_wrapped(img->data, len); buffer = gst_buffer_new_allocate (NULL, len, NULL); gst_buffer_map(buffer,&map,GST_MAP_READ); memcpy(map.data, img->data, len); GST_BUFFER_PTS (buffer) = sp->m_timestamp; GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale_int (1, GST_SECOND, 25); sp->m_timestamp += GST_BUFFER_DURATION (buffer) ; // equals to gst_app_src_push_buffer (GST_APP_SRC_CAST (appsrc), buffer); g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret); gst_buffer_unmap(buffer, &map); gst_buffer_unref (buffer); if (ret != GST_FLOW_OK) { /* something wrong, stop pushing */ LOG_ERROR_MSG ("push-buffer fail"); } } // usleep (25 * 1000); return ret; } ``` ================================================ FILE: application_develop/app/inc/Common.h ================================================ /* * @Description: Common Utils. * @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-27 12:24:25 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-08-29 12:10:16 */ #pragma once #include #include #include #include #include #include #include #include #define LOG_ERROR_MSG(msg, ...) \ g_print("** ERROR: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__) #define LOG_INFO_MSG(msg, ...) \ g_print("** INFO: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__) #define LOG_WARN_MSG(msg, ...) \ g_print("** WARN: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__) typedef std::function, void*)> SinkPutDataFunc; typedef std::function(void*)> SrcGetDataFunc; ================================================ FILE: application_develop/app/inc/DoubleBufferCache.h ================================================ /* * @Description: Double Buffer Cache Implement. * @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-29 08:51:01 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-08-29 12:36:32 */ #pragma once #include #include #include #include /** @brief Shared-buffer cache manager. * * */ template class DoubleBufCache { public: /** @brief constructor * @param[in] notify_func When a new buffer is fed, it triggers the function handle. * */ DoubleBufCache(std::function notify_func = std::function{nullptr}) noexcept : swap_ready(false) { this->notify_func = notify_func; } /** @brief deconstructor * */ ~DoubleBufCache() noexcept { if (!debug_info.empty() ) { printf("DoubleBufCache %s destroyed.", debug_info.c_str()); } } /** @brief Put the latest buffer into cache queue to be processed. * * Giving up control of previous front buffer. * @param[in] The latest buffer. * */ void feed(std::shared_ptr pending) { if (nullptr == pending.get()) { throw "ERROR: feed an empty buffer to DoubleBufCache"; } swap_mtx.lock(); front_sp = pending; swap_mtx.unlock(); swap_ready = true; if (notify_func) { notify_func(); } return; } /** @brief Get the front buffer. * @return Front buffer. * */ std::shared_ptr front() noexcept { return front_sp; } /** @brief Fetch the shared back buffer. * @return Back buffer. 
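     * @note Until feed() publishes a new front buffer (swap_ready == true),
     *       repeated fetch() calls return the same back buffer, so consumers
     *       may see a frame more than once.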
* */ std::shared_ptr fetch() noexcept { if (swap_ready) { swap_mtx.lock(); back_sp = front_sp; swap_mtx.unlock(); swap_ready = false; } return back_sp; } private: //! Notification function will be called, if a new buffer fed. std::function notify_func; //! The buffer cache can be swapped if the flag is equal to true. std::atomic swap_ready; //! Swapping mutex lock for thread safety. std::mutex swap_mtx; //! Front buffer for previous results saving. std::shared_ptr front_sp; //! Back buffer to be fetched. std::shared_ptr back_sp; public: //! Indicate the name of an instantiated object for debug. std::string debug_info; }; ================================================ FILE: application_develop/app/inc/appsink.h ================================================ /* * @Description: Appsink Pipeline Header. * @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-28 10:05:59 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-09-01 11:53:47 */ #pragma once #include "Common.h" typedef struct _SinkPipelineConfig { std::string src; /*-------------qtivtransform-------------*/ std::string conv_format; int conv_width; int conv_height; }SinkPipelineConfig; class SinkPipeline { public: SinkPipeline (const SinkPipelineConfig& config); bool Create (void); bool Start (void); bool Pause (void); bool Resume (void); void Destroy (void); void SetCallbacks (SinkPutDataFunc func, void* args); ~SinkPipeline (void); public: SinkPutDataFunc m_putDataFunc; void* m_putDataArgs; SinkPipelineConfig m_config; GstElement* m_sinkPipeline; GstElement* m_source; GstElement* m_qtdemux; GstElement* m_h264parse; GstElement* m_decoder; GstElement* m_qtivtrans; GstElement* m_capfilter; GstElement* m_appsink; }; /* Decode Pipeline: filesrc location=test.mp4 ! qtdemux ! qtivdec ! qtivtransform ! video/x-raw,format=BGR,width=1920,height=1080 ! appsink Display Pipeline: appsrc stream-type=0 is-live=true caps=video/x-raw,format=BGR,width=1920,height=1080 ! videoconvert ! video/x-raw,format=NV12,width=1920,height=1080 ! waylandsink */ ================================================ FILE: application_develop/app/inc/appsrc.h ================================================ /* * @Description: Appsrc Pipeline Header. * @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-28 10:06:03 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-09-01 11:53:53 */ #pragma once #include "Common.h" typedef struct _SrcPipelineConfig { /*--------------appsrc caps--------------*/ std::string src_format; int src_width; int src_height; /*-------------videoconvert-------------*/ std::string conv_format; int conv_width; int conv_height; }SrcPipelineConfig; class SrcPipeline { public: SrcPipeline (const SrcPipelineConfig& config); bool Create (void); bool Start (void); bool Pause (void); bool Resume (void); void Destroy (void); void SetCallbacks (SrcGetDataFunc func, void* args); ~SrcPipeline (void); public: SrcGetDataFunc m_getDataFunc; void* m_getDataArgs; uint64_t m_timestamp; SrcPipelineConfig m_config; GstElement* m_srcPipeline; GstElement* m_appsrc; GstElement* m_videoconv; GstElement* m_capfilter; GstElement* m_display; }; /* Decode Pipeline: filesrc location=test.mp4 ! qtdemux ! qtivdec ! qtivtransform ! video/x-raw,format=BGR,width=1920,height=1080 ! appsink Display Pipeline: appsrc stream-type=0 is-live=true caps=video/x-raw,format=BGR,width=1920,height=1080 ! videoconvert ! video/x-raw,format=NV12,width=1920,height=1080 ! 
waylandsink */ ================================================ FILE: application_develop/app/src/appsink.cpp ================================================ /* * @Description: Appsink Pipeline Implement. * @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-28 09:57:03 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-09-01 12:41:27 */ #include "appsink.h" GstFlowReturn cb_appsink_new_sample ( GstElement* appsink, gpointer user_data) { // LOG_INFO_MSG ("cb_appsink_new_sample called, user data: %p", user_data); SinkPipeline* sp = reinterpret_cast (user_data); GstSample* sample = NULL; GstBuffer* buffer = NULL; GstMapInfo map; const GstStructure* info = NULL; GstCaps* caps = NULL; GstFlowReturn ret = GST_FLOW_OK; int sample_width = 0; int sample_height = 0; // equals to gst_app_sink_pull_sample (GST_APP_SINK_CAST (appsink), sample); g_signal_emit_by_name (appsink, "pull-sample", &sample, &ret); if (ret != GST_FLOW_OK) { LOG_ERROR_MSG ("can't pull GstSample."); return ret; } if (sample) { buffer = gst_sample_get_buffer (sample); if ( buffer == NULL ) { LOG_ERROR_MSG ("get buffer is null"); goto exit; } gst_buffer_map (buffer, &map, GST_MAP_READ); caps = gst_sample_get_caps (sample); if ( caps == NULL ) { LOG_ERROR_MSG ("get caps is null"); goto exit; } info = gst_caps_get_structure (caps, 0); if ( info == NULL ) { LOG_ERROR_MSG ("get info is null"); goto exit; } // ---- Read frame and convert to opencv format --------------- // convert gstreamer data to OpenCV Mat, you could actually // resolve height / width from caps... gst_structure_get_int (info, "width", &sample_width); gst_structure_get_int (info, "height", &sample_height); // appsink product queue produce { // init a cv::Mat with gst buffer address: deep copy if (map.data == NULL) { LOG_ERROR_MSG("appsink buffer data empty\n"); return GST_FLOW_OK; } cv::Mat img (sample_height, sample_width, CV_8UC3, (unsigned char*)map.data, cv::Mat::AUTO_STEP); img = img.clone(); if (sp->m_putDataFunc) { sp->m_putDataFunc(std::make_shared (img), sp->m_putDataArgs); } else { goto exit; } } } exit: if (buffer) { gst_buffer_unmap (buffer, &map); } if (sample) { gst_sample_unref (sample); } return GST_FLOW_OK; } static void cb_qtdemux_pad_added ( GstElement* src, GstPad* new_pad, gpointer user_data) { GstPadLinkReturn ret; GstCaps* new_pad_caps = NULL; GstStructure* new_pad_struct = NULL; const gchar* new_pad_type = NULL; SinkPipeline* vp = reinterpret_cast (user_data); GstPad* v_sinkpad = gst_element_get_static_pad ( reinterpret_cast (vp->m_h264parse), "sink"); new_pad_caps = gst_pad_get_current_caps (new_pad); new_pad_struct = gst_caps_get_structure (new_pad_caps, 0); new_pad_type = gst_structure_get_name (new_pad_struct); if (!g_str_has_prefix (new_pad_type, "video/x-h264")) { LOG_WARN_MSG ("It has type '%s' which is not raw video. 
Ignoring.", new_pad_type); goto exit; } /* Attempt the link */ ret = gst_pad_link (new_pad, v_sinkpad); if (GST_PAD_LINK_FAILED (ret)) { LOG_ERROR_MSG ("fail to link qtdemux and h264parse"); } exit: /* Unreference the new pad's caps, if we got them */ if (new_pad_caps != NULL) gst_caps_unref (new_pad_caps); /* Unreference the sink pad */ gst_object_unref (v_sinkpad); } SinkPipeline::SinkPipeline (const SinkPipelineConfig& config) { m_config = config; } SinkPipeline::~SinkPipeline () { Destroy (); } bool SinkPipeline::Create (void) { GstCaps* m_transCaps; // decode pipeline if (!(m_sinkPipeline = gst_pipeline_new ("decode-pipeline"))) { LOG_ERROR_MSG ("Failed to create pipeline named decode-pipeline"); goto exit; } gst_pipeline_set_auto_flush_bus (GST_PIPELINE (m_sinkPipeline), true); if (!(m_source = gst_element_factory_make ("filesrc", "src"))) { LOG_ERROR_MSG ("Failed to create element filesrc named src"); goto exit; } g_object_set (G_OBJECT (m_source), "location", m_config.src.c_str(), NULL); gst_bin_add_many (GST_BIN (m_sinkPipeline), m_source, NULL); if (!(m_qtdemux = gst_element_factory_make ("qtdemux", "demux"))) { LOG_ERROR_MSG ("Failed to create element qtdemux named demux"); goto exit; } gst_bin_add_many (GST_BIN (m_sinkPipeline), m_qtdemux, NULL); if (!gst_element_link_many (m_source, m_qtdemux, NULL)) { LOG_ERROR_MSG ("Failed to link filesrc->qtdemux"); goto exit; } if (!(m_h264parse = gst_element_factory_make ("h264parse", "parse"))) { LOG_ERROR_MSG ("Failed to create element h264parse named parse"); goto exit; } gst_bin_add_many (GST_BIN (m_sinkPipeline), m_h264parse, NULL); // Link qtdemux with h264parse g_signal_connect (m_qtdemux, "pad-added", G_CALLBACK(cb_qtdemux_pad_added), reinterpret_cast (this)); if (!(m_decoder = gst_element_factory_make ("qtivdec", "decode"))) { LOG_ERROR_MSG ("Failed to create element qtivdec named decode"); goto exit; } gst_bin_add_many (GST_BIN (m_sinkPipeline), m_decoder, NULL); if (!(m_qtivtrans = gst_element_factory_make ("qtivtransform", "transform"))) { LOG_ERROR_MSG ("Failed to create element qtivtransform named transform"); goto exit; } gst_bin_add_many (GST_BIN (m_sinkPipeline), m_qtivtrans, NULL); m_transCaps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, m_config.conv_format.c_str(), "width", G_TYPE_INT, m_config.conv_width, "height", G_TYPE_INT, m_config.conv_height, NULL); if (!(m_capfilter = gst_element_factory_make("capsfilter", "capfilter"))) { LOG_ERROR_MSG ("Failed to create element capsfilter named capfilter"); goto exit; } g_object_set (G_OBJECT(m_capfilter), "caps", m_transCaps, NULL); gst_caps_unref (m_transCaps); gst_bin_add_many (GST_BIN (m_sinkPipeline), m_capfilter, NULL); if (!(m_appsink = gst_element_factory_make ("appsink", "appsink"))) { LOG_ERROR_MSG ("Failed to create element appsink named appsink"); goto exit; } // equals to gst_app_sink_set_emit_signals (GST_APP_SINK_CAST (m_appsink), true); g_object_set (m_appsink, "emit-signals", TRUE, NULL); // full definition of appsink callbacks /* GstAppSinkCallbacks callbacks = {cb_appsink_eos, cb_appsink_new_preroll, cb_appsink_new_sample}; gst_app_sink_set_callbacks (GST_APP_SINK_CAST (m_appsink), &callbacks, reinterpret_cast (this), NULL); */ g_signal_connect (m_appsink, "new-sample", G_CALLBACK (cb_appsink_new_sample), reinterpret_cast (this)); gst_bin_add_many (GST_BIN (m_sinkPipeline), m_appsink, NULL); if (!gst_element_link_many (m_h264parse, m_decoder, m_qtivtrans, m_capfilter, m_appsink, NULL)) { LOG_ERROR_MSG ("Failed to link h264parse->qtivdec->" 
"qtivtransfrom->capfilter->appsink"); goto exit; } return true; exit: LOG_ERROR_MSG ("Failed to create video pipeline"); return false; } bool SinkPipeline::Start (void) { if (GST_STATE_CHANGE_FAILURE == gst_element_set_state (m_sinkPipeline, GST_STATE_PLAYING)) { LOG_ERROR_MSG ("Failed to set decode pipeline to playing state"); return false; } return true; } bool SinkPipeline::Pause (void) { GstState state, pending; LOG_INFO_MSG ("StopPipeline called"); if (GST_STATE_CHANGE_ASYNC == gst_element_get_state ( m_sinkPipeline, &state, &pending, 5 * GST_SECOND / 1000)) { LOG_WARN_MSG ("Failed to get state of decode pipeline"); return false; } if (state == GST_STATE_PAUSED) { return true; } else if (state == GST_STATE_PLAYING) { gst_element_set_state (m_sinkPipeline, GST_STATE_PAUSED); gst_element_get_state (m_sinkPipeline, &state, &pending, GST_CLOCK_TIME_NONE); return true; } else { LOG_WARN_MSG ("Invalid state of decode pipeline(%d)", GST_STATE_CHANGE_ASYNC); return false; } } bool SinkPipeline::Resume (void) { GstState state, pending; LOG_INFO_MSG ("StartPipeline called"); if (GST_STATE_CHANGE_ASYNC == gst_element_get_state ( m_sinkPipeline, &state, &pending, 5 * GST_SECOND / 1000)) { LOG_WARN_MSG ("Failed to get state of decode pipeline"); return false; } if (state == GST_STATE_PLAYING) { return true; } else if (state == GST_STATE_PAUSED) { gst_element_set_state (m_sinkPipeline, GST_STATE_PLAYING); gst_element_get_state (m_sinkPipeline, &state, &pending, GST_CLOCK_TIME_NONE); return true; } else { LOG_WARN_MSG ("Invalid state of decode pipeline(%d)", GST_STATE_CHANGE_ASYNC); return false; } } void SinkPipeline::Destroy (void) { if (m_sinkPipeline) { gst_element_set_state (m_sinkPipeline, GST_STATE_NULL); gst_object_unref (m_sinkPipeline); m_sinkPipeline = NULL; } } void SinkPipeline::SetCallbacks (SinkPutDataFunc func, void* args) { LOG_INFO_MSG ("sink set callback called"); m_putDataFunc = func; m_putDataArgs = args; } ================================================ FILE: application_develop/app/src/appsrc.cpp ================================================ /* * @Description: Appsrc Pipeline Implement. 
* @version: 1.0 * @Author: Ricardo Lu * @Date: 2021-08-28 09:57:13 * @LastEditors: Ricardo Lu * @LastEditTime: 2021-09-01 12:41:48 */ #include "appsrc.h" GstFlowReturn cb_appsrc_need_data ( GstElement* appsrc, guint length, gpointer user_data) { // LOG_INFO_MSG ("cb_appsrc_need_data called, user_data: %p", user_data); SrcPipeline* sp = reinterpret_cast (user_data); GstBuffer* buffer; GstMapInfo map; GstFlowReturn ret = GST_FLOW_OK; std::shared_ptr img; if (sp->m_getDataFunc) { img = sp->m_getDataFunc (sp->m_getDataArgs); int len = img->total() * img->elemSize(); // zero-copy GstBuffer // buffer = gst_buffer_new_wrapped(img->data, len); buffer = gst_buffer_new_allocate (NULL, len, NULL); gst_buffer_map(buffer,&map,GST_MAP_READ); memcpy(map.data, img->data, len); GST_BUFFER_PTS (buffer) = sp->m_timestamp; GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale_int (1, GST_SECOND, 25); sp->m_timestamp += GST_BUFFER_DURATION (buffer) ; // equals to gst_app_src_push_buffer (GST_APP_SRC_CAST (appsrc), buffer); g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret); gst_buffer_unmap(buffer, &map); gst_buffer_unref (buffer); if (ret != GST_FLOW_OK) { /* something wrong, stop pushing */ LOG_ERROR_MSG ("push-buffer failed"); } } // usleep (25 * 1000); return ret; } SrcPipeline::SrcPipeline (const SrcPipelineConfig& config) { m_config = config; m_timestamp = 0; } SrcPipeline::~SrcPipeline () { Destroy (); } bool SrcPipeline::Create (void) { GstCaps* m_transCaps; // display pipeline if (!(m_srcPipeline = gst_pipeline_new ("display-pipeline"))) { LOG_ERROR_MSG ("Failed to create pipeline named display-pipeline"); goto exit; } gst_pipeline_set_auto_flush_bus (GST_PIPELINE (m_srcPipeline), true); if (!(m_appsrc = gst_element_factory_make ("appsrc", "appsrc"))) { LOG_ERROR_MSG ("Failed to create element appsrc named appsrc"); goto exit; } m_transCaps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, m_config.src_format.c_str(), "width", G_TYPE_INT, m_config.src_width, "height", G_TYPE_INT, m_config.src_height, NULL); // equals to gst_app_src_set_caps (GST_APP_SRC_CAST (m_appsrc), m_transCaps); g_object_set (G_OBJECT(m_appsrc), "caps", m_transCaps, NULL); gst_caps_unref (m_transCaps); // equals to gst_app_src_set_stream_type (GST_APP_SRC_CAST (m_appsrc), // GST_APP_STREAM_TYPE_STREAM); g_object_set (G_OBJECT(m_appsrc), "stream-type", GST_APP_STREAM_TYPE_STREAM, NULL); g_object_set (G_OBJECT(m_appsrc), "is-live", true, NULL); // full definition of appsrc callbacks /* GstAppSrcCallbacks callbacks = {cb_appsrc_need_data, cb_appsrc_enough_data, cb_appsrc_seek_data}; gst_app_src_set_callbacks (GST_APP_SRC_CAST (m_appsrc), &callbacks, reinterpret_cast (this), NULL); */ g_signal_connect (m_appsrc, "need-data", G_CALLBACK (cb_appsrc_need_data), reinterpret_cast (this)); gst_bin_add_many (GST_BIN (m_srcPipeline), m_appsrc, NULL); if (!(m_videoconv = gst_element_factory_make ("videoconvert", "videoconv"))) { LOG_ERROR_MSG ("Failed to create element videoconvert named videoconv"); goto exit; } gst_bin_add_many (GST_BIN (m_srcPipeline), m_videoconv, NULL); m_transCaps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, m_config.conv_format.c_str(), "width", G_TYPE_INT, m_config.conv_width, "height", G_TYPE_INT, m_config.conv_height, NULL); if (!(m_capfilter = gst_element_factory_make("capsfilter", "capfilter"))) { LOG_ERROR_MSG ("Failed to create element capsfilter named capfilter"); goto exit; } g_object_set (G_OBJECT(m_capfilter), "caps", m_transCaps, NULL); gst_caps_unref (m_transCaps); 
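    // the capsfilter pins videoconvert's output to m_config.conv_format
    // (NV12 in this demo) so that waylandsink receives a format it can
    // display; note that videoconvert performs this BGR -> NV12
    // conversion on the CPU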
gst_bin_add_many (GST_BIN (m_srcPipeline), m_capfilter, NULL); if (!(m_display = gst_element_factory_make ("waylandsink", "display"))) { LOG_ERROR_MSG ("Failed to create element waylandsink named display"); goto exit; } gst_bin_add_many (GST_BIN (m_srcPipeline), m_display, NULL); if (!gst_element_link_many (m_appsrc, m_videoconv, m_capfilter,m_display, NULL)) { LOG_ERROR_MSG ("Failed to link h264parse->qtivdec->waylandsink"); goto exit; } return true; exit: LOG_ERROR_MSG ("Failed to create video pipeline"); return false; } bool SrcPipeline::Start (void) { if (GST_STATE_CHANGE_FAILURE == gst_element_set_state (m_srcPipeline, GST_STATE_PLAYING)) { LOG_ERROR_MSG ("Failed to set display pipeline to playing state"); return false; } return true; } bool SrcPipeline::Pause (void) { GstState state, pending; LOG_INFO_MSG ("StopPipeline called"); if (GST_STATE_CHANGE_ASYNC == gst_element_get_state ( m_srcPipeline, &state, &pending, 5 * GST_SECOND / 1000)) { LOG_WARN_MSG ("Failed to get state of display pipeline"); return false; } if (state == GST_STATE_PAUSED) { return true; } else if (state == GST_STATE_PLAYING) { gst_element_set_state (m_srcPipeline, GST_STATE_PAUSED); gst_element_get_state (m_srcPipeline, &state, &pending, GST_CLOCK_TIME_NONE); return true; } else { LOG_WARN_MSG ("Invalid state of display pipeline(%d)", GST_STATE_CHANGE_ASYNC); return false; } } bool SrcPipeline::Resume (void) { GstState state, pending; LOG_INFO_MSG ("StartPipeline called"); if (GST_STATE_CHANGE_ASYNC == gst_element_get_state ( m_srcPipeline, &state, &pending, 5 * GST_SECOND / 1000)) { LOG_WARN_MSG ("Failed to get state of display pipeline"); return false; } if (state == GST_STATE_PLAYING) { return true; } else if (state == GST_STATE_PAUSED) { gst_element_set_state (m_srcPipeline, GST_STATE_PLAYING); gst_element_get_state (m_srcPipeline, &state, &pending, GST_CLOCK_TIME_NONE); return true; } else { LOG_WARN_MSG ("Invalid state of display pipeline(%d)", GST_STATE_CHANGE_ASYNC); return false; } } void SrcPipeline::Destroy (void) { if (m_srcPipeline) { gst_element_set_state (m_srcPipeline, GST_STATE_NULL); gst_object_unref (m_srcPipeline); m_srcPipeline = NULL; } } void SrcPipeline::SetCallbacks (SrcGetDataFunc func, void* args) { LOG_INFO_MSG ("src set callback called"); m_getDataFunc = func; m_getDataArgs = args; } ================================================ FILE: application_develop/app/src/main.cpp ================================================ /* * @Description: Test Program. 
================================================
FILE: application_develop/app/src/main.cpp
================================================
/*
 * @Description: Test Program.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-28 09:17:16
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-30 13:34:17
 */

#include <sys/stat.h>
#include <unistd.h>
#include <gflags/gflags.h>
#include <opencv2/opencv.hpp>

#include "appsink.h"
#include "appsrc.h"
#include "DoubleBufferCache.h"

static GMainLoop* g_main_loop = NULL;

static bool validateSrcUri (const char* name, const std::string& value)
{
    if (!value.compare("")) {
        LOG_ERROR_MSG ("Source Uri required!");
        return false;
    }

    struct stat statbuf;
    if (!stat(value.c_str(), &statbuf)) {
        LOG_INFO_MSG ("Found config file: %s", value.c_str());
        return true;
    }

    LOG_ERROR_MSG ("Invalid config file.");
    return false;
}

DEFINE_string (srcuri, "", "source media file fed into the appsink pipeline");
DEFINE_validator (srcuri, &validateSrcUri);

/**
 * @brief: Appsink transparent interface, draw a green rectangle
 * @Author: Ricardo Lu
 * @param {shared_ptr} img - appsink image need to be drawn
 * @param {void*} user_data - thread global data buffer structure
 * @return {*}
 */
void putData (std::shared_ptr<cv::Mat> img, void* user_data)
{
    // LOG_INFO_MSG ("putData called");
    DoubleBufCache<cv::Mat>* db = reinterpret_cast<DoubleBufCache<cv::Mat>*> (user_data);

    std::string sinkwords ("appsink");
    cv::Point fontpos = cv::Point (100, 115);
    cv::Scalar fontcolor (150, 255, 40);
    cv::putText (*img, sinkwords, fontpos, cv::FONT_HERSHEY_COMPLEX, 0.8,
        fontcolor, 2, cv::LINE_AA);

    cv::Rect rect (100, 100, 1720, 880);
    cv::Scalar rectcolor (0, 200, 0);
    cv::rectangle (*img, rect, rectcolor, 3);

    db->feed (img);
}

/**
 * @brief: Appsrc transparent interface, draw a red rectangle
 * @Author: Ricardo Lu
 * @param {void*} user_data - thread global data buffer structure
 * @return std::shared_ptr - return to appsrc
 */
std::shared_ptr<cv::Mat> getData (void* user_data)
{
    // LOG_INFO_MSG ("getData called");
    DoubleBufCache<cv::Mat>* db = reinterpret_cast<DoubleBufCache<cv::Mat>*> (user_data);

    std::shared_ptr<cv::Mat> img;
    img = db->fetch ();

    std::string srcwords ("appsrc");
    cv::Point fontpos = cv::Point (1700, 970);
    cv::Scalar fontcolor (150, 255, 40);
    cv::putText (*img, srcwords, fontpos, cv::FONT_HERSHEY_COMPLEX, 0.8,
        fontcolor, 2, cv::LINE_AA);

    cv::Rect rect (110, 110, 1720, 880);
    cv::Scalar rectcolor (0, 0, 200);
    cv::rectangle (*img, rect, rectcolor, 3);

    return img;
}
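/* Data flow of this test program, as wired up in main() below:
 *
 *   SinkPipeline (appsink) --putData()--> DoubleBufCache<cv::Mat>
 *   DoubleBufCache<cv::Mat> --getData()--> SrcPipeline (appsrc -> waylandsink)
 *
 * putData() stamps each decoded frame with green overlays and feeds the
 * cache; getData() fetches the latest frame, stamps it in red, and hands it
 * to appsrc's need-data callback.
 */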
int main (int argc, char* argv[])
{
    google::ParseCommandLineFlags (&argc, &argv, true);

    SinkPipelineConfig m_sinkConfig;
    SinkPipeline* m_sinkPipeline = NULL;
    SrcPipelineConfig m_srcConfig;
    SrcPipeline* m_srcPipeline = NULL;
    SinkPutDataFunc m_sinkPutDataFunc;
    SrcGetDataFunc m_srcGetDataFunc;
    DoubleBufCache<cv::Mat>* m_bufferCache = NULL;

    gst_init (&argc, &argv);

    if (!(g_main_loop = g_main_loop_new (NULL, FALSE))) {
        LOG_ERROR_MSG ("Failed to new a object with type GMainLoop");
        goto exit;
    }

    m_sinkConfig.src = FLAGS_srcuri;
    m_sinkConfig.conv_format = "BGR";
    m_sinkConfig.conv_width = 1920;
    m_sinkConfig.conv_height = 1080;

    m_srcConfig.src_format = "BGR";
    m_srcConfig.src_width = 1920;
    m_srcConfig.src_height = 1080;
    m_srcConfig.conv_format = "NV12";
    m_srcConfig.conv_width = 1920;
    m_srcConfig.conv_height = 1080;

    m_sinkPipeline = new SinkPipeline (m_sinkConfig);
    m_srcPipeline = new SrcPipeline (m_srcConfig);

    m_sinkPutDataFunc = std::bind (putData, std::placeholders::_1,
        std::placeholders::_2);
    m_srcGetDataFunc = std::bind (getData, std::placeholders::_1);
    m_bufferCache = new DoubleBufCache<cv::Mat> ();

    m_sinkPipeline->SetCallbacks (m_sinkPutDataFunc, m_bufferCache);
    m_srcPipeline->SetCallbacks (m_srcGetDataFunc, m_bufferCache);

    if (!m_sinkPipeline->Create ()) {
        LOG_ERROR_MSG ("Pipeline Create failed.");
        goto exit;
    }
    if (!m_srcPipeline->Create ()) {
        LOG_ERROR_MSG ("Pipeline Create failed.");
        goto exit;
    }

    m_sinkPipeline->Start ();
    // the appsink pipeline starts up slower than appsrc, so delay the appsrc
    // pipeline's playing
    sleep (1);
    m_srcPipeline->Start ();

    g_main_loop_run (g_main_loop);

exit:
    if (g_main_loop) g_main_loop_unref (g_main_loop);

    if (m_sinkPipeline) {
        m_sinkPipeline->Destroy ();
        delete m_sinkPipeline;
        m_sinkPipeline = NULL;
    }

    if (m_srcPipeline) {
        m_srcPipeline->Destroy ();
        delete m_srcPipeline;
        m_srcPipeline = NULL;
    }

    if (m_bufferCache) {
        delete m_bufferCache;
        m_bufferCache = NULL;
    }

    google::ShutDownCommandLineFlags ();
    return 0;
}

================================================
FILE: application_develop/build_pipeline/CMakeLists.txt
================================================
# created by Ricardo Lu in 08/28/2021
cmake_minimum_required(VERSION 3.10)

project(build_pipeline)

set(CMAKE_CXX_STANDARD 11)

include(FindPkgConfig)
pkg_check_modules(GST    REQUIRED gstreamer-1.0)
pkg_check_modules(GSTAPP REQUIRED gstreamer-app-1.0)
pkg_check_modules(GLIB   REQUIRED glib-2.0)
pkg_check_modules(GFLAGS REQUIRED gflags)

include_directories(
    ${PROJECT_SOURCE_DIR}/inc
    ${GST_INCLUDE_DIRS}
    ${GSTAPP_INCLUDE_DIRS}
    ${GLIB_INCLUDE_DIRS}
    ${GFLAGS_INCLUDE_DIRS}
)

link_directories(
    ${GST_LIBRARY_DIRS}
    ${GSTAPP_LIBRARY_DIRS}
    ${GLIB_LIBRARY_DIRS}
    ${GFLAGS_LIBRARY_DIRS}
)

OPTION(COMPILE_PARSE_LAUNCH "build gst_parse_launch" OFF)
OPTION(COMPILE_FACTORY_MAKE "build gst_element_factory_make" OFF)

if(COMPILE_PARSE_LAUNCH)
    add_definitions(-DPARSE_LAUNCH)
    add_executable(GstParse
        src/VideoPipeline.cpp
        src/gst_parse_launch.cpp
    )
    target_link_libraries(GstParse
        ${GST_LIBRARIES}
        ${GSTAPP_LIBRARIES}
        ${GLIB_LIBRARIES}
        ${GFLAGS_LIBRARIES}
    )
endif(COMPILE_PARSE_LAUNCH)

if(COMPILE_FACTORY_MAKE)
    add_definitions(-DFACTORY_MAKE)
    add_executable(GstElementFactory
        src/VideoPipeline.cpp
        src/gst_element_factory_make.cpp
    )
    target_link_libraries(GstElementFactory
        ${GST_LIBRARIES}
        ${GSTAPP_LIBRARIES}
        ${GLIB_LIBRARIES}
        ${GFLAGS_LIBRARIES}
    )
endif(COMPILE_FACTORY_MAKE)

================================================
FILE: application_develop/build_pipeline/README.md
================================================
# Build Pipeline

GStreamer ships a command-line tool, `gst-launch-1.0`, for quickly building and running a pipeline. It likewise provides a C API for embedding a GStreamer pipeline in C/C++ application development. This tutorial gives code examples of the two ways to build a GStreamer pipeline.

**Tutorial: [Build Pipeline](https://ricardolu.gitbook.io/gstreamer/application-development/build-pipeline)**

Related documentation:

- [GstParse](https://gstreamer.freedesktop.org/documentation/gstreamer/gstparse.html?gi-language=c#gstparse-page)
- [GstElement](https://gstreamer.freedesktop.org/documentation/gstreamer/gstelement.html?gi-language=c#gst_element_link)
- [GstElementFactory](https://gstreamer.freedesktop.org/documentation/gstreamer/gstelementfactory.html?gi-language=c#gstelementfactory-page)

## Build

```cmake
# CMakeLists.txt Option
# Build gst_parse_launch.cpp
## OPTION(COMPILE_PARSE_LAUNCH "build gst_parse_launch" OFF)
cmake -H. -Bbuild -DCOMPILE_PARSE_LAUNCH=ON -DCOMPILE_FACTORY_MAKE=OFF

# Build gst_element_factory_make.cpp
## OPTION(COMPILE_FACTORY_MAKE "build gst_element_factory_make" OFF)
cmake -H. -Bbuild -DCOMPILE_PARSE_LAUNCH=OFF -DCOMPILE_FACTORY_MAKE=ON

cd build
make
```

### Run

```shell
# gst_parse_launch
./GstParse --srcuri ../video.mp4

# gst_element_factory_make
./GstElementFactory --srcuri ../video.mp4
```

Running the programs above is equivalent to the following pipeline:

```shell
gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! qtivdec ! waylandsink
```
================================================
FILE: application_develop/build_pipeline/doc/build_pipeline.md
================================================
# Build Pipeline

GStreamer ships a command-line tool, `gst-launch-1.0`, for quickly building and running a pipeline, and it also provides a C API for embedding a GStreamer pipeline in C/C++ development. Below are the two ways of building a GStreamer pipeline.

- **Github: [build-pipeline](https://github.com/gesanqiu/gstreamer-example/tree/main/application_develop/build_pipeline)**

## gst_parse_launch()

`gst_parse_launch()` is a function of [GstParse](https://gstreamer.freedesktop.org/documentation/gstreamer/gstparse.html?gi-language=c#gstparse-page). `GstParse` lets developers create a new pipeline from a `gst-launch-1.0` style command line.

Note: the functions involved take some shortcuts to create a dynamic pipeline, so such a pipeline is not always reusable (for example, setting the state to NULL and then back to PLAYING).

```c
GstElement *
gst_parse_launch (const gchar * pipeline_description, GError ** error)
```

Creates a new pipeline from a `gst-launch-1.0` style command line.

- `pipeline_description`: command-line string describing the pipeline
- `error`: error message, if any

### GstParseError

- `GST_PARSE_ERROR_SYNTAX (0)`: the pipeline description is malformed
- `GST_PARSE_ERROR_NO_SUCH_ELEMENT (1)`: the pipeline contains an unknown GstElement (plugin)
- `GST_PARSE_ERROR_NO_SUCH_PROPERTY (2)`: a GstElement (plugin) in the pipeline was given a property that does not exist
- `GST_PARSE_ERROR_LINK (3)`: the GstPads of a pair of plugins in the pipeline cannot be linked
- `GST_PARSE_ERROR_COULD_NOT_SET_PROPERTY (4)`: a property of a GstElement (plugin) in the pipeline was set incorrectly
- `GST_PARSE_ERROR_EMPTY_BIN (5)`: the pipeline contains an empty GstBin
- `GST_PARSE_ERROR_EMPTY (6)`: the pipeline description is empty
- `GST_PARSE_ERROR_DELAYED_LINK (7)`: a GstPad in the pipeline has a pending (delayed) link

### Code Example

```c++
#include <cstdio>
#include <string>

#include <gst/gst.h>

int main (int argc, char* argv[])
{
    GstElement* pipeline;
    GError* error = NULL;

    gst_init (&argc, &argv);

    std::string m_strPipeline ("filesrc location=test.mp4 ! qtdemux ! qtivdec ! waylandsink");

    pipeline = gst_parse_launch (m_strPipeline.c_str(), &error);
    if (error != NULL) {
        printf ("Could not construct pipeline: %s", error->message);
        g_clear_error (&error);
    }
    // ...
    return 0;
}
```

Once `gst_parse_launch()` has finished parsing you have a `GstPipeline`, and you can run it with `gst_element_set_state()`.
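A minimal sketch of that last step, reusing the `pipeline` variable from the example above (the blocking bus wait is an illustrative choice, not part of the original example): set the state to PLAYING and watch the bus for an error or end-of-stream.

```c++
// Run the parsed pipeline and block until an error or EOS arrives.
GstBus* bus = gst_element_get_bus (pipeline);
gst_element_set_state (pipeline, GST_STATE_PLAYING);

GstMessage* msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
    (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
if (msg != NULL) gst_message_unref (msg);

gst_object_unref (bus);
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
```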
## gst_element_factory_make()

`gst_element_factory_make()` is a function of [GstElementFactory](https://gstreamer.freedesktop.org/documentation/gstreamer/gstelementfactory.html?gi-language=c#gstelementfactory-page). A `GstElementFactory` is used to instantiate a `GstElement`.

Developers can instantiate a GstElement with `gst_element_factory_find()` plus `gst_element_factory_create()`, or directly with `gst_element_factory_make()`.

```c++
#include <gst/gst.h>

int main (int argc, char* argv[])
{
    GstElement* src;
    GstElementFactory* srcfactory;

    gst_init (&argc, &argv);

    {
        srcfactory = gst_element_factory_find ("filesrc");
        g_return_val_if_fail (srcfactory != NULL, -1);
        src = gst_element_factory_create (srcfactory, "src");
        g_return_val_if_fail (src != NULL, -1);
    }/*equals*/{
        src = gst_element_factory_make ("filesrc", "src");
    }
    // ...
    return 0;
}
```

`gst_element_factory_make()` only creates and instantiates a single `GstElement`. To build a `GstPipeline` you still have to add a series of `GstElement`s to the `GstPipeline` and link them in the right order.

### gst_bin_add_many()

`gst_bin_add_many()` adds `GstElement`s to the pipeline (in no particular order).

`gst_bin_add_many()` merely puts the individual `GstElement`s into one `GstBin`, i.e. each `GstElement`'s `parent` pointer is set to the same `GstBin/GstPipeline`. The additions are unordered; developers then use `gst_element_link_many()` to link those `GstElement`s together.

### gst_element_link_many()

```c
gboolean
gst_element_link_many (GstElement * element_1, GstElement * element_2, ... ...)
```

- The arguments are the `GstElement` variables in pipeline order, terminated with `NULL`.
- `gst_bin_add_many()` must be called before linking, to make sure all the `GstElement`s belong to the same `GstBin/GstPipeline`.

### gst_element_link_pads()

```c
// call chain of gst_element_link_many()
gst_element_link_many()
    ->gst_element_link()
        ->gst_element_link_pads()
            ->gst_element_link_pads_full()
```

Following the call chain shows that linking two `GstElement`s is really linking two `GstPad`s, i.e. a `src pad` to a `sink pad`. Inspecting most GStreamer plugins with `gst-inspect-1.0` shows `sink pad` and `src pad` templates whose `Availability` is `Always`, which means those plugins can always be linked. There are exceptions, though, such as `qtdemux`:

```shell
Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/quicktime
      video/mj2
      audio/x-m4a
      application/x-3gp

  SRC template: 'video_%u'
    Availability: Sometimes
    Capabilities:
      ANY

  SRC template: 'audio_%u'
    Availability: Sometimes
    Capabilities:
      ANY

  SRC template: 'subtitle_%u'
    Availability: Sometimes
    Capabilities:
      ANY
```

As shown, `qtdemux` has three kinds of `src pad`. No data flows at link time, so the element downstream of `qtdemux` has no way to know which `src pad` `qtdemux` will end up using; linking `qtdemux` to other plugins therefore has to be analyzed case by case.
When a `GstElement` switches from `READY` to `PAUSED`, data from the upstream `GstElement` pre-rolls into `qtdemux`. At that point `qtdemux` parses the data, configures the stream info, and creates the matching `src pad`s. Once that is done, each `GstPad` is added to `qtdemux` via `gst_element_add_pad()`, which contains the following line:

```
/* emit the PAD_ADDED signal */
g_signal_emit (element, gst_element_signals[PAD_ADDED], 0, pad);
```

This means that whenever qtdemux finishes creating a src pad it emits a signal, so we can register a callback on qtdemux to receive this `pad-added` signal and then call gst_pad_link() to complete the link:

```c++
static void cb_qtdemux_pad_added (
    GstElement* src, GstPad* new_pad, gpointer user_data)
{
    LOG_INFO_MSG ("cb_qtdemux_pad_added called");

    GstPadLinkReturn ret;
    GstCaps* new_pad_caps = NULL;
    GstStructure* new_pad_struct = NULL;
    const gchar* new_pad_type = NULL;
    GstPad* v_sinkpad = NULL;
    GstPad* a_sinkpad = NULL;

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    new_pad_caps = gst_pad_get_current_caps (new_pad);
    new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
    new_pad_type = gst_structure_get_name (new_pad_struct);

    if (g_str_has_prefix (new_pad_type, "video/x-h264")) {
        LOG_INFO_MSG ("Linking video/x-h264");
        /* Attempt the link */
        v_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_queue0), "sink");
        ret = gst_pad_link (new_pad, v_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link video source with waylandsink");
            goto exit;
        }
    } else if (g_str_has_prefix (new_pad_type, "audio/mpeg")) {
        LOG_INFO_MSG ("Linking audio/mpeg");
        a_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_queue1), "sink");
        ret = gst_pad_link (new_pad, a_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link audio source and audioconvert");
            goto exit;
        }
    }

exit:
    /* Unreference the new pad's caps, if we got them */
    if (new_pad_caps != NULL)
        gst_caps_unref (new_pad_caps);
    /* Unreference the sink pads */
    if (v_sinkpad) gst_object_unref (v_sinkpad);
    if (a_sinkpad) gst_object_unref (a_sinkpad);
}

// pipeline create
{
    g_signal_connect (m_qtdemux, "pad-added",
        G_CALLBACK (cb_qtdemux_pad_added), reinterpret_cast<void*> (this));
}
```

When the file source carries both audio and video data, the `pad-added` signal triggers the `cb_qtdemux_pad_added` callback twice. To complete the correct links, `qtdemux` creates a different `src-pad` for each parsed data format; each `src-pad` carries a `GstCaps` describing the data format, and we can read the `GstCaps`'s `GstStructure` to obtain the format details.

The catch is that we must know the source's data formats in advance to pick the right plugins. Video these days is usually encoded as `h264/h265`; audio formats vary widely. For a file data source you can read this information with the `ffmpeg` tool:

```shell
ts@ts-OptiPlex-7070:~/Downloads$ ffmpeg -i 1-2.mp4
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
  configuration: --enable-nonfree --enable-pic --enable-shared --prefix=/usr/local/ffmpeg
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x174a280] st: 0 edit list: 2 Missing key frame while searching for timestamp: 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x174a280] st: 0 edit list 2 Cannot find an index entry before timestamp: 0.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '1-2.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2021-01-29T02:37:38.000000Z
    location        : +40.0004+116.3568/
    location-eng    : +40.0004+116.3568/
    com.android.version: 11
    com.android.manufacturer: Xiaomi
    com.android.model: M2011K2C
  Duration: 00:03:39.03, start: 0.000000, bitrate: 12391 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 12233 kb/s, SAR 1:1 DAR 16:9, 29.99 fps, 30 tbr, 90k tbn, 60 tbc (default)
    Metadata:
      creation_time   : 2021-01-29T02:37:38.000000Z
      handler_name    : VideoHandle
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 96 kb/s (default)
    Metadata:
      creation_time   : 2021-01-29T02:37:38.000000Z
      handler_name    : SoundHandle
At least one output file must be specified
```

My test file's video codec is `h264` and its audio codec is `aac`, so the video demuxed by `qtdemux` should be linked to `h264parse` and the audio to `aacparse`. Checking those two plugins with `gst-inspect-1.0` shows their `sink pad`s accept `video/x-h264` and `audio/mpeg` respectively, which completes the caps filtering and linking.

Note: `gst_parse_launch()` uses delayed linking, so it does not have this limitation.

## Comparison

During development the two construction approaches each have strengths and weaknesses:

- `gst_parse_launch` is quick and convenient. As a rule, if a pipeline runs successfully under the `gst-launch-1.0` command-line tool, it can be constructed successfully too, with far less code than `gst_element_factory_make`. That is an unbeatable advantage for rapid development. However, the resulting pipeline depends heavily on the command-line string, its ability to be adjusted dynamically is weaker than `gst_element_factory_make`'s, and its internals are not transparent to the developer, which makes it less convenient for application development.
- With `gst_element_factory_make`, developers precisely manage the lifetime of every plugin (GstElement) themselves, giving a much finer-grained level of control than `gst_parse_launch`. This requires a fairly deep understanding of the data the pipeline will process and of the plugins involved, so it is harder to get started with.

================================================
FILE: application_develop/build_pipeline/inc/Common.h
================================================
/*
 * @Description: Common Utils.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 12:24:25
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-28 03:18:09
 */
#pragma once

#include <iostream>
#include <string>

#include <gst/gst.h>
#include <glib.h>

#define LOG_ERROR_MSG(msg, ...) \
    g_print("** ERROR: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

#define LOG_INFO_MSG(msg, ...) \
    g_print("** INFO: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

#define LOG_WARN_MSG(msg, ...) \
    g_print("** WARN: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

================================================
FILE: application_develop/build_pipeline/inc/VideoPipeline.h
================================================
/*
 * @Description: GstPipeline common header.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 08:11:39
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-31 14:06:46
 */
#pragma once

#include "Common.h"

typedef struct _VideoPipelineConfig {
    std::string src;
} VideoPipelineConfig;

class VideoPipeline {
public:
    VideoPipeline (const VideoPipelineConfig& config);
    bool Create (void);
    bool Start (void);
    bool Pause (void);
    bool Resume (void);
    void Destroy (void);
#ifdef PARSE_LAUNCH
    bool SetPipeline (std::string& pipeline);
#endif
    ~VideoPipeline (void);

public:
    VideoPipelineConfig m_config;
    GstElement* m_gstPipeline;
#ifdef PARSE_LAUNCH
    std::string m_pipeline;
#endif
#ifdef FACTORY_MAKE
    GstElement* m_source;
    GstElement* m_qtdemux;
    GstElement* m_queue0;
    GstElement* m_queue1;
    GstElement* m_h264parse;
    GstElement* m_vdecoder;
    GstElement* m_display;
    GstElement* m_aacparse;
    GstElement* m_adecoder;
    GstElement* m_audioConv;
    GstElement* m_audioReSample;
    GstElement* m_player;
#endif
};

/*
 * gst-launch-1.0 filesrc location=test.mp4 ! qtdemux name=demux demux. ! \
 *     queue ! h264parse ! qtivdec ! waylandsink demux. ! queue ! aacparse ! \
 *     avdec_aac ! audioconvert ! audioresample ! pulsesink volume=1
 */

================================================
FILE: application_develop/build_pipeline/src/VideoPipeline.cpp
================================================
/*
 * @Description: Implement of VideoPipeline.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 12:01:39
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-09-01 12:40:47
 */

#include "VideoPipeline.h"

#ifdef FACTORY_MAKE
static void cb_qtdemux_pad_added (
    GstElement* src, GstPad* new_pad, gpointer user_data)
{
    LOG_INFO_MSG ("cb_qtdemux_pad_added called");

    GstPadLinkReturn ret;
    GstCaps* new_pad_caps = NULL;
    GstStructure* new_pad_struct = NULL;
    const gchar* new_pad_type = NULL;
    GstPad* v_sinkpad = NULL;
    GstPad* a_sinkpad = NULL;

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    new_pad_caps = gst_pad_get_current_caps (new_pad);
    new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
    new_pad_type = gst_structure_get_name (new_pad_struct);

    if (g_str_has_prefix (new_pad_type, "video/x-h264")) {
        LOG_INFO_MSG ("Linking video/x-h264");
        /* Attempt the link */
        v_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_queue0), "sink");
        ret = gst_pad_link (new_pad, v_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link video source with waylandsink");
            goto exit;
        }
    } else if (g_str_has_prefix (new_pad_type, "audio/mpeg")) {
        LOG_INFO_MSG ("Linking audio/mpeg");
        a_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_queue1), "sink");
        ret = gst_pad_link (new_pad, a_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link audio source and audioconvert");
            goto exit;
        }
    }

exit:
    /* Unreference the new pad's caps, if we got them */
    if (new_pad_caps != NULL)
        gst_caps_unref (new_pad_caps);
    /* Unreference the sink pads */
    if (v_sinkpad) gst_object_unref (v_sinkpad);
    if (a_sinkpad) gst_object_unref (a_sinkpad);
}
#endif

VideoPipeline::VideoPipeline (const VideoPipelineConfig& config)
{
    m_config = config;
}

VideoPipeline::~VideoPipeline ()
{
    Destroy ();
}

#ifdef PARSE_LAUNCH
bool VideoPipeline::SetPipeline (std::string& pipeline)
{
    m_pipeline = pipeline;
    return true;
}
#endif

bool VideoPipeline::Create (void)
{
#ifdef PARSE_LAUNCH
    GError* error = NULL;

    LOG_INFO_MSG ("Parsing pipeline: %s", m_pipeline.c_str());
    m_gstPipeline = gst_parse_launch (m_pipeline.c_str(), &error);
    if (error != NULL) {
        LOG_ERROR_MSG ("Could not construct pipeline: %s", error->message);
        g_clear_error (&error);
        goto exit;
    }

    return true;
#endif

#ifdef FACTORY_MAKE
    if (!(m_gstPipeline = gst_pipeline_new ("video-pipeline"))) {
        LOG_ERROR_MSG ("Failed to create pipeline named video-pipeline");
        goto exit;
    }
    gst_pipeline_set_auto_flush_bus (GST_PIPELINE (m_gstPipeline), true);

    if (!(m_source = gst_element_factory_make ("filesrc", "src"))) {
        LOG_ERROR_MSG ("Failed to create element filesrc named src");
        goto exit;
    }
    g_object_set (G_OBJECT (m_source), "location", m_config.src.c_str(), NULL);
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_source, NULL);

    if (!(m_qtdemux = gst_element_factory_make ("qtdemux", "demux"))) {
        LOG_ERROR_MSG ("Failed to create element qtdemux named demux");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_qtdemux, NULL);

    if (!(m_queue0 = gst_element_factory_make ("queue", "queue0"))) {
        LOG_ERROR_MSG ("Failed to create element queue named queue0");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_queue0, NULL);

    if (!(m_queue1 = gst_element_factory_make ("queue", "queue1"))) {
        LOG_ERROR_MSG ("Failed to create element queue named queue1");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_queue1, NULL);

    if (!gst_element_link_many (m_source, m_qtdemux, NULL)) {
        LOG_ERROR_MSG ("Failed to link filesrc->qtdemux");
        goto exit;
    }

    if (!(m_h264parse = gst_element_factory_make ("h264parse", "vparse"))) {
        LOG_ERROR_MSG ("Failed to create element h264parse named vparse");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_h264parse, NULL);

    // Link qtdemux src-pad with queue
    g_signal_connect (m_qtdemux, "pad-added",
        G_CALLBACK (cb_qtdemux_pad_added), reinterpret_cast<void*> (this));

    if (!(m_vdecoder = gst_element_factory_make ("qtivdec", "vdecoder"))) {
        LOG_ERROR_MSG ("Failed to create element qtivdec named vdecoder");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_vdecoder, NULL);

    if (!(m_display = gst_element_factory_make ("waylandsink", "display"))) {
        LOG_ERROR_MSG ("Failed to create element waylandsink named display");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_display, NULL);

    if (!gst_element_link_many (m_queue0, m_h264parse, m_vdecoder,
        m_display, NULL)) {
        LOG_ERROR_MSG ("Failed to link queue->h264parse->qtivdec->waylandsink");
        goto exit;
    }

    if (!(m_aacparse = gst_element_factory_make ("aacparse", "aparse"))) {
        LOG_ERROR_MSG ("Failed to create element aacparse named aparse");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_aacparse, NULL);

    if (!(m_adecoder = gst_element_factory_make ("avdec_aac", "adecoder"))) {
        LOG_ERROR_MSG ("Failed to create element avdec_aac named adecoder");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_adecoder, NULL);

    if (!(m_audioConv = gst_element_factory_make ("audioconvert", "audioconv"))) {
        LOG_ERROR_MSG ("Failed to create element audioconvert named audioconv");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_audioConv, NULL);

    if (!(m_audioReSample = gst_element_factory_make ("audioresample", "aresample"))) {
        LOG_ERROR_MSG ("Failed to create element audioresample named aresample");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_audioReSample, NULL);

    if (!(m_player = gst_element_factory_make ("pulsesink", "player"))) {
        LOG_ERROR_MSG ("Failed to create element pulsesink named player");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_player, NULL);
    g_object_set (G_OBJECT (m_player), "volume", 1.0, NULL);

    if (!gst_element_link_many (m_queue1, m_aacparse, m_adecoder,
        m_audioConv, m_audioReSample, m_player, NULL)) {
        LOG_ERROR_MSG ("Failed to link "
            "queue->aacparse->avdec_aac->audioconvert"
            "->audioresample->pulsesink");
        goto exit;
    }

    return true;
#endif

exit:
    LOG_ERROR_MSG ("Failed to create video pipeline");
    return false;
}

bool VideoPipeline::Start (void)
{
    if (GST_STATE_CHANGE_FAILURE == gst_element_set_state (m_gstPipeline,
        GST_STATE_PLAYING)) {
        LOG_ERROR_MSG ("Failed to set pipeline to playing state");
        return false;
    }
    return true;
}

bool VideoPipeline::Pause (void)
{
    GstState state, pending;
    LOG_INFO_MSG ("StopPipeline called");

    if (GST_STATE_CHANGE_ASYNC == gst_element_get_state (
        m_gstPipeline, &state, &pending, 5 * GST_SECOND / 1000)) {
        LOG_WARN_MSG ("Failed to get state of pipeline");
        return false;
    }

    if (state == GST_STATE_PAUSED) {
        return true;
    } else if (state == GST_STATE_PLAYING) {
        gst_element_set_state (m_gstPipeline, GST_STATE_PAUSED);
        gst_element_get_state (m_gstPipeline, &state, &pending,
            GST_CLOCK_TIME_NONE);
        return true;
    } else {
        LOG_WARN_MSG ("Invalid state of pipeline(%d)", state);
        return false;
    }
}

bool VideoPipeline::Resume (void)
{
    GstState state, pending;
    LOG_INFO_MSG ("StartPipeline called");

    if (GST_STATE_CHANGE_ASYNC == gst_element_get_state (
        m_gstPipeline, &state, &pending, 5 * GST_SECOND / 1000)) {
        LOG_WARN_MSG ("Failed to get state of pipeline");
        return false;
    }

    if (state == GST_STATE_PLAYING) {
        return true;
    } else if (state == GST_STATE_PAUSED) {
        gst_element_set_state (m_gstPipeline, GST_STATE_PLAYING);
        gst_element_get_state (m_gstPipeline, &state, &pending,
            GST_CLOCK_TIME_NONE);
        return true;
    } else {
        LOG_WARN_MSG ("Invalid state of pipeline(%d)", state);
        return false;
    }
}

void VideoPipeline::Destroy (void)
{
    if (m_gstPipeline) {
        gst_element_set_state (m_gstPipeline, GST_STATE_NULL);
        gst_object_unref (m_gstPipeline);
        m_gstPipeline = NULL;
    }
}
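/* A gentler teardown sketch (illustrative, not part of the original class):
 * before forcing the pipeline to NULL, send EOS and wait for it to drain,
 * so parsers and sinks can flush pending data properly.
 *
 * gst_element_send_event (m_gstPipeline, gst_event_new_eos ());
 * GstBus* bus = gst_element_get_bus (m_gstPipeline);
 * GstMessage* msg = gst_bus_timed_pop_filtered (bus, 3 * GST_SECOND,
 *     (GstMessageType) (GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
 * if (msg) gst_message_unref (msg);
 * gst_object_unref (bus);
 */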
================================================
FILE: application_develop/build_pipeline/src/gst_element_factory_make.cpp
================================================
/*
 * @Description: Build GstPipeline with GstElementFactory.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 08:11:25
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-28 03:39:18
 */

#include <sys/stat.h>
#include <gflags/gflags.h>

#include "VideoPipeline.h"

static GMainLoop* g_main_loop = NULL;

static bool validateSrcUri (const char* name, const std::string& value)
{
    if (!value.compare("")) {
        LOG_ERROR_MSG ("Source Uri required!");
        return false;
    }

    struct stat statbuf;
    if (!stat(value.c_str(), &statbuf)) {
        LOG_INFO_MSG ("Found config file: %s", value.c_str());
        return true;
    }

    LOG_ERROR_MSG ("Invalid config file.");
    return false;
}

DEFINE_string (srcuri, "", "source media file used to build the pipeline");
DEFINE_validator (srcuri, &validateSrcUri);

int main (int argc, char* argv[])
{
    google::ParseCommandLineFlags (&argc, &argv, true);

    VideoPipelineConfig m_vpConfig;
    VideoPipeline* m_vp = NULL;

    gst_init (&argc, &argv);

    if (!(g_main_loop = g_main_loop_new (NULL, FALSE))) {
        LOG_ERROR_MSG ("Failed to new a object with type GMainLoop");
        goto exit;
    }

    m_vpConfig.src = FLAGS_srcuri;
    m_vp = new VideoPipeline (m_vpConfig);

    if (!m_vp->Create ()) {
        LOG_ERROR_MSG ("Pipeline Create failed.");
        goto exit;
    }

    m_vp->Start ();

    g_main_loop_run (g_main_loop);

exit:
    if (g_main_loop) g_main_loop_unref (g_main_loop);

    if (m_vp) {
        m_vp->Destroy ();
        delete m_vp;
        m_vp = NULL;
    }

    google::ShutDownCommandLineFlags ();
    return 0;
}

================================================
FILE: application_develop/build_pipeline/src/gst_parse_launch.cpp
================================================
/*
 * @Description: Build GstPipeline with GstParse.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 08:11:04
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-28 03:30:43
 */

#include <sys/stat.h>
#include <sstream>
#include <string>
#include <gflags/gflags.h>

#include "VideoPipeline.h"

static GMainLoop* g_main_loop = NULL;

static bool validateSrcUri (const char* name, const std::string& value)
{
    if (!value.compare("")) {
        LOG_ERROR_MSG ("Source Uri required!");
        return false;
    }

    struct stat statbuf;
    if (!stat(value.c_str(), &statbuf)) {
        LOG_INFO_MSG ("Found config file: %s", value.c_str());
        return true;
    }

    LOG_ERROR_MSG ("Invalid config file.");
    return false;
}

DEFINE_string (srcuri, "", "source media file used to build the pipeline");
DEFINE_validator (srcuri, &validateSrcUri);

int main (int argc, char* argv[])
{
    google::ParseCommandLineFlags (&argc, &argv, true);

    VideoPipelineConfig m_vpConfig;
    VideoPipeline* m_vp = NULL;
    std::ostringstream m_pipelineCmd;
    std::string m_strPipeline;

    gst_init (&argc, &argv);

    if (!(g_main_loop = g_main_loop_new (NULL, FALSE))) {
        LOG_ERROR_MSG ("Failed to new a object with type GMainLoop");
        goto exit;
    }

    m_vpConfig.src = FLAGS_srcuri;
    m_vp = new VideoPipeline (m_vpConfig);
pulsesink"; m_strPipeline = m_pipelineCmd.str(); m_vp->SetPipeline(m_strPipeline); if (!m_vp->Create()) { LOG_ERROR_MSG ("Pipeline Create failed: lack of elements"); goto exit; } m_vp->Start(); g_main_loop_run (g_main_loop); exit: if (g_main_loop) g_main_loop_unref (g_main_loop); if (m_vp) { m_vp->Destroy(); delete m_vp; m_vp = NULL; } google::ShutDownCommandLineFlags (); return 0; } ================================================ FILE: application_develop/custom_user_plugin/CMakeLists.txt ================================================ # create by Ricardo Lu in 05-19-2022 ================================================ FILE: application_develop/custom_user_plugin/README.md ================================================ # gstreamer-example Gstreamer开发教程。 ================================================ FILE: application_develop/custom_user_plugin/config.h ================================================ /* * @Description: Config herader. * @version: 1.0 * @Author: Ricardo Lu * @Date: 2022-05-19 16:32:32 * @LastEditors: Ricardo Lu * @LastEditTime: 2022-05-19 16:49:07 */ #define PACKAGE "rloverlay" #define PACKAGE_VERSION "1.0.0" #define PACKAGE_LICENSE "GPL" #define PACKAGE_SUMMARY "Simple open-source GStreamer plugin for overlay." #define PACKAGE_ORIGIN "https://github.com/gesanqiu/gstreamer-example/tree/main/application_develop/custom_user_plugin" ================================================ FILE: application_develop/custom_user_plugin/gstoverlay.c ================================================ /* * @Description: GStreamer overlay plugin. * @version: 1.0 * @Author: Ricardo Lu * @Date: 2022-05-19 15:43:22 * @LastEditors: Ricardo Lu * @LastEditTime: 2022-05-21 17:14:03 */ #ifdef HAVE_CONFIG_H #include "config.h" #endif #include #include #include "gstoverlay.h" #define GST_CAT_DEFAULT overlay_debug GST_DEBUG_CATEGORY_STATIC (overlay_debug); #define gst_overlay_parent_class parent_class G_DEFINE_TYPE (GstOverlay, gst_overlay, GST_TYPE_VIDEO_FILTER); #undef GST_VIDEO_SIZE_RANGE #define GST_VIDEO_SIZE_RANGE "(int) [ 1, 32767]" #define GST_VIDEO_FORMATS "{ NV12, NV21 }" // Default value of plugin properties #define DEFAULT_PROP_OVERLAY_TEXT NULL #define DEFAULT_PROP_OVERLAY_BBOX_X 10 #define DEFAULT_PROP_OVERLAY_BBOX_Y 10 #define DEFAULT_PROP_OVERLAY_TEXT_COLOR 0x00FF00FF // green #define DEFAULT_PROP_OVERLAY_TEXT_THICK 8 #define DEFAULT_PROP_OVERLAY_BBOX_LABEL NULL #define DEFAULT_PROP_OVERLAY_BBOX_X 10 #define DEFAULT_PROP_OVERLAY_BBOX_Y 18 #define DEFAULT_PROP_OVERLAY_BBOX_WIDTH 100 #define DEFAULT_PROP_OVERLAY_BBOX_HEIGHT 100 #define DEFAULT_PROP_OVERLAY_BBOX_COLOR 0xFF0000FF // red #define DEFAULT_PROP_OVERLAY_BBOX_THICK 8 /* Supported GST properties * PROP_OVERLAY_TEXT - overlays user defined texts * PROP_OVERLAY_TEXT_COLOR - overlays text color * PROP_OVERLAY_TEXT_POSITION - user defined text position, e.g: (left,top) * PROP_OVERLAY_BBOX - overlays user defined bounding box position, e.g: (left,top,width,height) * PROP_OVERLAY_BBOX_COLOR - overlays bounding box color */ enum { PROP_0, PROP_OVERLAY_TEXT, PROP_OVERLAY_TEXT_COLOR, PROP_OVERLAY_TEXT_POSITION, PROP_OVERLAY_TEXT_THICK, PROP_OVERLAY_BBOX, PROP_OVERLAY_BBOX_COLOR, PROP_OVERLAY_BBOX_THICK }; static void gst_overlay_set_property(GObject *object, guint prop_id, const GValue *value, GParamSpec *pspec) { GstOverlay *gst_overlay = GST_OVERLAY(object); const gchar *propname = g_param_spec_get_name(pspec); GstState state = GST_STATE(gst_overlay); if (!OVERLAY_IS_PROPERTY_MUTABLE_IN_CURRENT_STATE(pspec, state)) { GST_WARNING 
("Property '%s' change not supported in %s state!", propname, gst_element_state_get_name (state)); return; } GST_OBJECT_LOCK(gst_overlay); switch (prop_id) { case PROP_OVERLAY_TEXT: gst_overlay->text = g_strdup(g_value_get_strint(value)); break; case PROP_OVERLAY_TEXT_COLOR: gst_overlay->text_color = g_value_get_uint(value); break; case PROP_OVERLAY_TEXT_POSITION: if (gst_value_array_get_size(value) != 2) { GST_DEBUG_OBJECT(gst_overlay, "text-position property is not set. Use default values."); break; } gst_overlay->usr_text->left = g_value_get_int(gst_value_array_get_value(value, 0)); gst_overlay->usr_text->top = g_value_get_int(gst_value_array_get_value(value, 1)); break; case PROP_OVERLAY_TEXT_THICK: gst_overlay->text_thick = g_value_get_uint(value); break; case PROP_OVERLAY_BBOX: if (gst_value_array_get_size(value) != 4) { GST_DEBUG_OBJECT(gst_overlay, "bbox property is not set. Use default values."); break; } gst_overlay->usr_bbox->bounding_box.x = g_value_get_int(gst_value_array_get_value(value, 0)); gst_overlay->usr_bbox->bounding_box.y = g_value_get_int(gst_value_array_get_value(value, 1)); gst_overlay->usr_bbox->bounding_box.w = g_value_get_int(gst_value_array_get_value(value, 2)); gst_overlay->usr_bbox->bounding_box.h = g_value_get_int(gst_value_array_get_value(value, 3)); break; case PROP_OVERLAY_BBOX_COLOR: gst_overlay->usr_bbox->bbox_color = g_value_get_uint(value); break; case PROP_OVERLAY_BBOX_THICK: gst_overlay->usr_bbox->bbox_thick = g_value_get_uint(value); break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec); break; } GST_OBJECT_UNLOCK(gst_overlay); } static void gst_overlay_get_property(GObject * object, guint prop_id, GValue * value, GParamSpec * pspec) { GstOverlay *gst_overlay = GST_OVERLAY(object); GST_OBJECT_LOCK(gst_overlay); switch (prop_id) { case PROP_OVERLAY_TEXT: break; case PROP_OVERLAY_TEXT_COLOR: g_value_set_uint(value, gst_overlay->usr_text->text_color); break; case PROP_OVERLAY_TEXT_POSITION: /* TO-DO: bbox to string*/ GValue val = G_VALUE_INIT; g_value_init (&val, G_TYPE_INT); g_value_set_int (&val, gst_overlay->usr_text->left); gst_value_array_append_value (value, &val); g_value_set_int (&val, gst_overlay->usr_text->top); gst_value_array_append_value (value, &val); break; case PROP_OVERLAY_TEXT_THICK: g_value_set_uint(value, gst_overlay->usr_text->thick); break; case PROP_OVERLAY_BBOX: /* TO-DO: bbox to string*/ GValue val = G_VALUE_INIT; g_value_init (&val, G_TYPE_INT); g_value_set_int (&val, gst_overlay->usr_bbox->bounding_box.x); gst_value_array_append_value (value, &val); g_value_set_int (&val, gst_overlay->usr_bbox->bounding_box.y); gst_value_array_append_value (value, &val); g_value_set_int (&val, gst_overlay->usr_bbox->bounding_box.w); gst_value_array_append_value (value, &val); g_value_set_int (&val, gst_overlay->usr_bbox->bounding_box.h); gst_value_array_append_value (value, &val); break; case PROP_OVERLAY_BBOX_COLOR: g_value_set_uint(value, gst_overlay->usr_bbox->bbox_color); break; case PROP_OVERLAY_BBOX_THICK: g_value_set_uint(value, gst_overlay->usr_bbox->thick); break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec); break; } GST_OBJECT_UNLOCK(gst_overlay); } static GstStaticCaps gst_overlay_format_caps = GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (GST_VIDEO_FORMATS) ";" GST_VIDEO_CAPS_MAKE_WITH_FEATURES ("ANY", GST_VIDEO_FORMATS)); static void gst_overlay_finalize(GObject *object) { GstOverlay *gst_overlay = GST_OVERLAY(object); if (gst_overlay->overlay) { free(gst_overlay->overlay); gst_overlay->overlay 
            = NULL;
    }

    if (gst_overlay->usr_text) {
        if (gst_overlay->usr_text->text) {
            free(gst_overlay->usr_text->text);
            gst_overlay->usr_text->text = NULL;
        }
        free(gst_overlay->usr_text);
        gst_overlay->usr_text = NULL;
    }

    if (gst_overlay->usr_bbox) {
        free(gst_overlay->usr_bbox);
        gst_overlay->usr_bbox = NULL;
    }

    g_mutex_clear (&gst_overlay->lock);

    G_OBJECT_CLASS (parent_class)->finalize (G_OBJECT (gst_overlay));
}

static gboolean gst_overlay_set_info (GstVideoFilter *filter,
    GstCaps *in, GstVideoInfo *in_info,
    GstCaps *out, GstVideoInfo *out_info)
{
    GstOverlay *gst_overlay = GST_OVERLAY(filter);
    BufferFormat new_format;

    gst_base_transform_set_passthrough(GST_BASE_TRANSFORM (filter), FALSE);

    gst_overlay->width = GST_VIDEO_INFO_WIDTH(in_info);
    gst_overlay->height = GST_VIDEO_INFO_HEIGHT(in_info);

    switch (GST_VIDEO_INFO_FORMAT(in_info)) { // GstVideoFormat
        case GST_VIDEO_FORMAT_NV12:
            new_format = NV12;
            break;
        case GST_VIDEO_FORMAT_NV21:
            new_format = NV21;
            break;
        default:
            GST_ERROR_OBJECT(gst_overlay, "Unhandled gst format: %d",
                GST_VIDEO_INFO_FORMAT(in_info));
            return FALSE;
    }

    if (gst_overlay->overlay && gst_overlay->format == new_format) {
        GST_DEBUG_OBJECT(gst_overlay, "Overlay already initialized");
        return TRUE;
    }

    if (gst_overlay->overlay) {
        free(gst_overlay->overlay);
    }

    gst_overlay->format = new_format;
    gst_overlay->overlay = (Overlay*)malloc(sizeof(Overlay));

    int32_t ret = gst_overlay->overlay->Init(gst_overlay->format);
    if (ret != 0) {
        GST_ERROR_OBJECT (gst_overlay, "Overlay init failed! Format: %u",
            (guint)gst_overlay->format);
        free(gst_overlay->overlay);
        gst_overlay->overlay = NULL;
        return FALSE;
    }

    return TRUE;
}

static GstFlowReturn gst_overlay_transform_frame_ip (GstVideoFilter *filter,
    GstVideoFrame *frame)
{
    GstOverlay *gst_overlay = GST_OVERLAY_CAST(filter);
    gboolean res = TRUE;

    if (!gst_overlay->overlay) {
        GST_ERROR_OBJECT(gst_overlay, "failed: overlay not initialized");
        return GST_FLOW_ERROR;
    }

    /* TO-DO: overlay */

    if (!res) {
        GST_ERROR_OBJECT (gst_overlay, "Overlay apply failed!");
        return GST_FLOW_ERROR;
    }

    return GST_FLOW_OK;
}
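/* A sketch for the TO-DO above, assuming the in-place drawing is done on
 * the CPU: the GstVideoFrame already has the buffer mapped, so the
 * NV12/NV21 planes can be reached through the GstVideoFrame macros.
 *
 * guint8* y_plane  = (guint8*) GST_VIDEO_FRAME_PLANE_DATA (frame, 0);
 * guint8* uv_plane = (guint8*) GST_VIDEO_FRAME_PLANE_DATA (frame, 1);
 * gint y_stride  = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);
 * gint uv_stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 1);
 *
 * // e.g. draw the top edge of usr_bbox as a luma-only white line:
 * for (gint i = 0; i < gst_overlay->usr_bbox->bounding_box.w; i++)
 *     y_plane[gst_overlay->usr_bbox->bounding_box.y * y_stride +
 *             gst_overlay->usr_bbox->bounding_box.x + i] = 255;
 */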
static void gst_overlay_init (GstOverlay *gst_overlay)
{
    gst_overlay->overlay = NULL;

    gst_overlay->text_color = DEFAULT_PROP_OVERLAY_TEXT_COLOR;
    gst_overlay->bbox_color = DEFAULT_PROP_OVERLAY_BBOX_COLOR;

    gst_overlay->usr_text = (GstOverlayText*) malloc(sizeof(GstOverlayText));
    gst_overlay->usr_text->text = DEFAULT_PROP_OVERLAY_TEXT;
    gst_overlay->usr_text->color = DEFAULT_PROP_OVERLAY_TEXT_COLOR;
    gst_overlay->usr_text->left = DEFAULT_PROP_OVERLAY_TEXT_X;
    gst_overlay->usr_text->top = DEFAULT_PROP_OVERLAY_TEXT_Y;
    gst_overlay->usr_text->thick = DEFAULT_PROP_OVERLAY_TEXT_THICK;

    gst_overlay->usr_bbox = (GstOverlayBBox*) malloc(sizeof(GstOverlayBBox));
    gst_overlay->usr_bbox->label = DEFAULT_PROP_OVERLAY_BBOX_LABEL;
    gst_overlay->usr_bbox->color = DEFAULT_PROP_OVERLAY_BBOX_COLOR;
    gst_overlay->usr_bbox->thick = DEFAULT_PROP_OVERLAY_BBOX_THICK;
    gst_overlay->usr_bbox->bounding_box.x = DEFAULT_PROP_OVERLAY_BBOX_X;
    gst_overlay->usr_bbox->bounding_box.y = DEFAULT_PROP_OVERLAY_BBOX_Y;
    gst_overlay->usr_bbox->bounding_box.w = DEFAULT_PROP_OVERLAY_BBOX_WIDTH;
    gst_overlay->usr_bbox->bounding_box.h = DEFAULT_PROP_OVERLAY_BBOX_HEIGHT;

    g_mutex_init(&gst_overlay->lock);

    GST_DEBUG_CATEGORY_INIT(overlay_debug, "rloverlay", 0, "Simple overlay");
}

static void gst_overlay_class_init (GstOverlayClass *klass)
{
    GObjectClass *gobject = G_OBJECT_CLASS(klass);
    GstElementClass *element = GST_ELEMENT_CLASS(klass);
    GstVideoFilterClass *filter = GST_VIDEO_FILTER_CLASS(klass);

    /* define virtual function pointers */
    gobject->set_property = GST_DEBUG_FUNCPTR(gst_overlay_set_property);
    gobject->get_property = GST_DEBUG_FUNCPTR(gst_overlay_get_property);
    gobject->finalize = GST_DEBUG_FUNCPTR(gst_overlay_finalize);

    /* define properties */
    g_object_class_install_property(gobject, PROP_OVERLAY_TEXT,
        g_param_spec_string ("text", "Overlay text.",
            "Renders text on top of video stream.",
            DEFAULT_PROP_OVERLAY_TEXT,
            G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
            G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING));

    g_object_class_install_property (gobject, PROP_OVERLAY_TEXT_COLOR,
        g_param_spec_uint ("text-color", "Text color",
            "Text overlay color in RGBA format.",
            0xFF, G_MAXUINT, DEFAULT_PROP_OVERLAY_TEXT_COLOR,
            G_PARAM_CONSTRUCT | G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

    g_object_class_install_property(gobject, PROP_OVERLAY_TEXT_POSITION,
        g_param_spec_string ("text-position", "Text position.",
            "Renders text on top of video stream at specified position.",
            "(10,10)",
            G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
            G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING));

    g_object_class_install_property (gobject, PROP_OVERLAY_TEXT_THICK,
        g_param_spec_uint ("text-thick", "Text thick",
            "Text overlay thick.",
            0, 50, DEFAULT_PROP_OVERLAY_TEXT_THICK,
            G_PARAM_CONSTRUCT | G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

    g_object_class_install_property(gobject, PROP_OVERLAY_BBOX,
        g_param_spec_string ("bbox", "Overlay bbox.",
            "Renders bbox on top of video stream at specified position.",
            "(10,10,100,100)",
            G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
            G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING));

    g_object_class_install_property (gobject, PROP_OVERLAY_BBOX_COLOR,
        g_param_spec_uint ("bbox-color", "BBox color",
            "Bounding box overlay color in RGBA format.",
            0xFF, G_MAXUINT, DEFAULT_PROP_OVERLAY_BBOX_COLOR,
            G_PARAM_CONSTRUCT | G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

    g_object_class_install_property (gobject, PROP_OVERLAY_BBOX_THICK,
        g_param_spec_uint ("bbox-thick", "BBox thick",
            "Bounding box overlay thick.",
            0, 50, DEFAULT_PROP_OVERLAY_BBOX_THICK,
            G_PARAM_CONSTRUCT | G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

    gst_element_class_set_static_metadata(element,
        "An example plugin", "Overlay",
        "Simple open-source GStreamer plugin for overlay.", "Ricardo Lu");

    /* define pads */
    gst_element_class_add_pad_template(element, gst_overlay_sink_template());
    gst_element_class_add_pad_template(element, gst_overlay_src_template());

    filter->set_info = GST_DEBUG_FUNCPTR(gst_overlay_set_info);
    filter->transform_frame_ip = GST_DEBUG_FUNCPTR(gst_overlay_transform_frame_ip);
}

static gboolean plugin_init (GstPlugin * plugin)
{
    return gst_element_register (plugin, "rloverlay", GST_RANK_PRIMARY,
        GST_TYPE_OVERLAY);
}

GST_PLUGIN_DEFINE (
    GST_VERSION_MAJOR,
    GST_VERSION_MINOR,
    rloverlay,
    "Simple open-source GStreamer plugin for overlay.",
    plugin_init, PACKAGE_VERSION, PACKAGE_LICENSE, PACKAGE_SUMMARY,
    PACKAGE_ORIGIN
)
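/* Once built and installed on the plugin path, the element can be used like
 * any other filter. A hypothetical smoke test (element and property names
 * taken from the class_init above; the plugin body itself is still a
 * skeleton with TO-DOs):
 *
 * GstElement* overlay = gst_element_factory_make ("rloverlay", "overlay");
 * g_object_set (G_OBJECT (overlay),
 *     "text", "hello", "text-color", 0x00FF00FF,
 *     "bbox-color", 0xFF0000FF, "bbox-thick", 4, NULL);
 */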
================================================
FILE: application_develop/custom_user_plugin/gstoverlay.h
================================================
/*
 * @Description: GStreamer overlay plugin.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2022-05-19 15:43:22
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2022-05-21 16:50:48
 */

#ifndef __GST_OVERLAY_H__
#define __GST_OVERLAY_H__

#include <gst/gst.h>
#include <gst/video/video.h>
#include <gst/video/gstvideofilter.h>

G_BEGIN_DECLS

#define GST_TYPE_OVERLAY (gst_overlay_get_type())
#define GST_OVERLAY(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), GST_TYPE_OVERLAY, GstOverlay))
#define GST_OVERLAY_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), GST_TYPE_OVERLAY, GstOverlayClass))
#define GST_IS_OVERLAY(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), GST_TYPE_OVERLAY))
#define GST_IS_OVERLAY_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), GST_TYPE_OVERLAY))
#define GST_OVERLAY_CAST(obj) ((GstOverlay *)(obj))

typedef enum _BufferFormat {
    NV12,
    NV21
} BufferFormat;

typedef struct _GstOverlay GstOverlay;
typedef struct _GstOverlayClass GstOverlayClass;
typedef struct _GstOverlayText GstOverlayText;
typedef struct _GstOverlayBBox GstOverlayBBox;

struct _GstOverlay {
    GstVideoFilter parent;

    Overlay *overlay;
    BufferFormat format;
    guint width;
    guint height;
    GMutex lock;

    /* User specified color */
    guint bbox_color;
    guint text_color;

    /* User specified overlay */
    GstOverlayText *usr_text;
    GstOverlayBBox *usr_bbox;
};

struct _GstOverlayClass {
    GstVideoFilterClass parent;
};

/* GstOverlayText - parameters for text overlay
 * text: user text
 * color: RGBA format overlay color
 * thick: user text box overlay thick
 * left: left coordinate of text
 * top: top coordinate of text
 */
struct _GstOverlayText {
    gchar *text;
    guint color;
    guint thick;
    guint left;
    guint top;
};

/* GstOverlayBBox - parameters for user bounding box overlay
 * label: bounding box label
 * bounding_box: bounding box rectangle
 * color: RGBA format overlay color
 * thick: bounding box overlay thick
 */
struct _GstOverlayBBox {
    gchar *label;
    GstVideoRectangle bounding_box;
    guint color;
    guint thick;
};

G_GNUC_INTERNAL GType gst_overlay_get_type(void);

#define IS_OVERLAY_PROPERTY_MUTABLE_IN_CURRENT_STATE(pspec, state) \
    ((pspec->flags & GST_PARAM_MUTABLE_PLAYING) ? (state <= GST_STATE_PLAYING) \
        : ((pspec->flags & GST_PARAM_MUTABLE_PAUSED) ? (state <= GST_STATE_PAUSED) \
            : ((pspec->flags & GST_PARAM_MUTABLE_READY) ?
                (state <= GST_STATE_READY) \
                : (state <= GST_STATE_NULL))))

G_END_DECLS

#endif /* __GST_OVERLAY_H__ */

================================================
FILE: application_develop/uridecodebin/CMakeLists.txt
================================================
# created by Ricardo Lu in 09/01/2021
cmake_minimum_required(VERSION 3.10)

project(uridecoderbin)

set(CMAKE_CXX_STANDARD 11)

set(OpenCV_DIR "/opt/thundersoft/opencv-4.2.0/lib/cmake/opencv4")
find_package(OpenCV REQUIRED)

include(FindPkgConfig)
pkg_check_modules(GST    REQUIRED gstreamer-1.0)
pkg_check_modules(GSTAPP REQUIRED gstreamer-app-1.0)
pkg_check_modules(GLIB   REQUIRED glib-2.0)
pkg_check_modules(GFLAGS REQUIRED gflags)

include_directories(
    ${PROJECT_SOURCE_DIR}/inc
    ${GST_INCLUDE_DIRS}
    ${GSTAPP_INCLUDE_DIRS}
    ${GLIB_INCLUDE_DIRS}
    ${GFLAGS_INCLUDE_DIRS}
    ${OpenCV_INCLUDE_DIRS}
)

link_directories(
    ${GST_LIBRARY_DIRS}
    ${GSTAPP_LIBRARY_DIRS}
    ${GLIB_LIBRARY_DIRS}
    ${GFLAGS_LIBRARY_DIRS}
    ${OpenCV_LIBRARY_DIRS}
)

add_executable(${PROJECT_NAME}
    src/VideoPipeline.cpp
    src/main.cpp
)

target_link_libraries(${PROJECT_NAME}
    ${GST_LIBRARIES}
    ${GSTAPP_LIBRARIES}
    ${GLIB_LIBRARIES}
    ${GFLAGS_LIBRARIES}
    ${OpenCV_LIBRARIES}
)

================================================
FILE: application_develop/uridecodebin/README.md
================================================
# uridecodebin

`uridecodebin` belongs to the Playback family. It bundles a series of automated steps and can effectively shorten a pipeline's element list, but the construction process is not transparent to the user, so the links between internal elements cannot be controlled precisely; the user has to weigh that trade-off.

**Tutorial: [uridecodebin](https://ricardolu.gitbook.io/gstreamer/application-development/uridecodebin)**

References:

- [uridecodebin](https://gstreamer.freedesktop.org/documentation/playback/uridecodebin.html?gi-language=c#uridecodebin-page)
- [decodebin](https://gstreamer.freedesktop.org/documentation/playback/decodebin.html?gi-language=c#decodebin-page)
- [Playback tutorial 3: Short-cutting the pipeline](https://gstreamer.freedesktop.org/documentation/tutorials/playback/short-cutting-the-pipeline.html#)
- [Basic tutorial 3: Dynamic pipelines](https://thiblahute.github.io/GStreamer-doc/tutorials/basic/dynamic-pipelines.html?gi-language=c)

## build & run

```shell
cmake -H. -Bbuild/
cd build
make

# filesrc
./uridecoderbin --srcuri file:///user/local/gstreamer-example/application_develop/video.mp4

# rtspsrc
# note that the pipeline includes an audio branch, so the RTSP stream must
# carry audio, or you must comment out all the audio-related code;
# otherwise the pipeline cannot run.
./uridecoderbin --srcuri rtsp://admin:1234@10.0.23.227:554

# souphttpsrc
./uridecoderbin --srcuri https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm
```

**Note:** the `uri` property of `uridecodebin` must be an absolute path.
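Since relative paths are rejected, here is a small sketch (not part of this example's sources) that builds a valid `file://` URI from a local path using GStreamer's own helper; `m_source` stands for the `uridecodebin` element created in `Create()`:

```c++
#include <gst/gst.h>

// gst_filename_to_uri() resolves relative paths against the current
// working directory and percent-encodes the result.
gchar* uri = gst_filename_to_uri ("video.mp4", NULL);
if (uri) {
    g_object_set (G_OBJECT (m_source), "uri", uri, NULL);
    g_free (uri);
}
```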
================================================
FILE: application_develop/uridecodebin/doc/uridecodebin.md
================================================
# uridecodebin

`uridecodebin` decodes URI data into raw media. It automatically selects a source element that can handle the URI's data and links it to a `decodebin`.

**Github: [uridecodebin](https://github.com/gesanqiu/gstreamer-example/tree/main/application_develop/uridecodebin)**

## uri

```c++
bool Create (void)
{
    // ...
    g_object_set (G_OBJECT (m_source), "uri", m_config.src.c_str(), NULL);
    // ...
}
```

## signals

### source-setup

```c++
static void cb_uridecodebin_source_setup (
    GstElement* pipeline, GstElement* source, gpointer user_data)
{
    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);
    LOG_INFO_MSG ("cb_uridecodebin_source_setup called");

    /* Configure rtspsrc
    if (g_object_class_find_property (G_OBJECT_GET_CLASS (source), "latency")) {
        LOG_INFO_MSG ("cb_uridecodebin_source_setup set %d latency",
            vp->m_config.rtsp_latency);
        g_object_set (G_OBJECT (source), "latency", vp->m_config.rtsp_latency, NULL);
    }
    */

    /* Configure appsrc
    GstCaps* src_caps;
    src_caps = gst_caps_new_simple ("video/x-raw",
        "format", G_TYPE_STRING, m_config.src_format.c_str(),
        "width", G_TYPE_INT, m_config.src_width,
        "height", G_TYPE_INT, m_config.src_height, NULL);
    g_object_set (G_OBJECT (source), "caps", src_caps, NULL);
    g_signal_connect (source, "need-data", G_CALLBACK (start_feed), data);
    g_signal_connect (source, "enough-data", G_CALLBACK (stop_feed), data);
    gst_caps_unref (src_caps);
    */
}
```

`uridecodebin` analyzes the value of its `uri` property and then selects a suitable source element. The `uri` value must be a complete absolute path, beginning with the source type.

### child-added

```c++
static void cb_decodebin_child_added (
    GstChildProxy* child_proxy, GObject* object,
    gchar* name, gpointer user_data)
{
    LOG_INFO_MSG ("cb_decodebin_child_added called");

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    LOG_INFO_MSG ("Element '%s' added to decodebin", name);

    if (g_strrstr (name, "qtdemux") == name) {
        vp->m_qtdemux = reinterpret_cast<GstElement*> (object);
    }

    if ((g_strrstr (name, "h264parse") == name)) {
        vp->m_h264parse = reinterpret_cast<GstElement*> (object);
    }

    if (g_strrstr (name, "qtivdec") == name) {
        vp->m_decoder = reinterpret_cast<GstElement*> (object);
        g_object_set (object, "turbo", vp->m_config.turbo, NULL);
        g_object_set (object, "skip-frames", vp->m_config.skip_frame, NULL);
    }
}

static void cb_uridecodebin_child_added (
    GstChildProxy* child_proxy, GObject* object,
    gchar* name, gpointer user_data)
{
    LOG_INFO_MSG ("cb_uridecodebin_child_added called");
    LOG_INFO_MSG ("Element '%s' added to uridecodebin", name);

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    if (g_strrstr (name, "decodebin") == name) {
        g_signal_connect (G_OBJECT (object), "child-added",
            G_CALLBACK (cb_decodebin_child_added), vp);
    }
}
```

The printed log reveals all the GstElements that `uridecodebin` adds to the pipeline automatically during initialization:

```
filesrc: qtdemux, multiqueue, h264parse/h265parse, capfilter, aacparse, avdec_aac, qtivdec
rtspsrc: rtph264depay/rtph265depay, h264parse/h265parse, capfilter, qtivdec
```

`uridecodebin` itself only adds a single GstElement, `decodebin`; all the GstElements above are built by `decodebin`. So in addition to the `child-added` callback on `uridecodebin`, another `child-added` callback for `decodebin` is registered inside that callback, used to set properties on the GstElements that `decodebin` constructs.

[build pipeline](https://ricardolu.gitbook.io/gstreamer/application-development/build-pipeline) mentioned that the demuxing done by the `filesrc` plugin requires manual linking, but in practice this link is completed automatically by `decodebin`. When I tried to do the link by hand, the program segfaulted because the `sink pad` of `h264parse` could not be obtained. Printing `vp->m_h264parse` at that point showed its initial pointer value, meaning `vp->m_h264parse` had not been initialized yet.
### pad-added

```c++
static void cb_uridecodebin_pad_added (
    GstElement* src, GstPad* new_pad, gpointer user_data)
{
    LOG_INFO_MSG ("cb_uridecodebin_pad_added called");

    GstPadLinkReturn ret;
    GstCaps* new_pad_caps = NULL;
    GstStructure* new_pad_struct = NULL;
    const gchar* new_pad_type = NULL;
    GstPad* v_sinkpad = NULL;
    GstPad* a_sinkpad = NULL;

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    new_pad_caps = gst_pad_get_current_caps (new_pad);
    new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
    new_pad_type = gst_structure_get_name (new_pad_struct);

    if (g_str_has_prefix (new_pad_type, "video/x-raw")) {
        LOG_INFO_MSG ("Linking video/x-raw");
        /* Attempt the link */
        v_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_videoConv), "sink");
        ret = gst_pad_link (new_pad, v_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link video source with waylandsink");
            goto exit;
        }
    } else if (g_str_has_prefix (new_pad_type, "audio/x-raw")) {
        LOG_INFO_MSG ("Linking audio/x-raw");
        a_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_audioConv), "sink");
        ret = gst_pad_link (new_pad, a_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link audio source and audioconvert");
            goto exit;
        }
    }

exit:
    /* Unreference the new pad's caps, if we got them */
    if (new_pad_caps != NULL)
        gst_caps_unref (new_pad_caps);
    /* Unreference the sink pads */
    if (v_sinkpad) gst_object_unref (v_sinkpad);
    if (a_sinkpad) gst_object_unref (a_sinkpad);
}
```

This links `uridecodebin`'s `src pad`s to the `sink pad`s of the branches ending in `waylandsink`. Since the data at this point is already decoded, the `caps` are of type `video/x-raw` or `audio/x-raw`.

**Regarding the `qtdemux` linking problem mentioned above: `uridecodebin` actually handles those links internally and creates a `src-pad` for every data type it can parse. Each time a `src-pad` of some type is created, the `pad-added` callback fires once, and the user has to complete the links between these `src-pad`s of `uridecodebin` and the downstream GstElements themselves.** This is best understood together with the pad-link material in [Build Pipeline](https://ricardolu.gitbook.io/gstreamer/application-development/build-pipeline#gst_element_link_pads).

In the code above I link the `video/x-raw` data to the `videoconvert` plugin and the `audio/x-raw` data to `audioconvert`. `videoconvert` and `audioconvert` are nearly universal format-conversion plugins that improve portability, letting the code run normally on all kinds of platforms. But keep in mind that both do their conversion on the CPU, which is very costly; using them when streaming out can introduce significant latency.

================================================
FILE: application_develop/uridecodebin/inc/Common.h
================================================
/*
 * @Description: Common Utils.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-27 12:24:25
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-29 13:31:00
 */
#pragma once

#include <iostream>
#include <string>
#include <vector>
#include <memory>
#include <functional>

#include <gst/gst.h>
#include <glib.h>

#include <opencv2/opencv.hpp>

#define LOG_ERROR_MSG(msg, ...) \
    g_print("** ERROR: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

#define LOG_INFO_MSG(msg, ...) \
    g_print("** INFO: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

#define LOG_WARN_MSG(msg, ...) \
    g_print("** WARN: <%s:%s:%d>: " msg "\n", __FILE__, __func__, __LINE__, ##__VA_ARGS__)

// callback functions
// NOTE: the template arguments below are reconstructed around cv::Mat,
// matching how Sink/Src callbacks are used in the appsink/appsrc example;
// the exact originals were lost in extraction.
typedef std::function<void (std::shared_ptr<cv::Mat>, void*)> SinkPutDataFunc;
typedef std::function<std::shared_ptr<cv::Mat> (void*)> SrcGetDataFunc;
typedef std::function<std::shared_ptr<cv::Mat> (void*)> ProbeGetResultFunc;
typedef std::function<void (std::shared_ptr<cv::Mat>&)> ProcDataFunc;

================================================
FILE: application_develop/uridecodebin/inc/DoubleBufferCache.h
================================================
/*
 * @Description: Double Buffer Cache Implement.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-29 08:51:01
 * @LastEditors: Ricardo Lu
 * @LastEditTime: 2021-08-29 12:39:35
 */
#pragma once

#include <atomic>
#include <cstdio>
#include <functional>
#include <memory>
#include <mutex>
#include <string>

/** @brief Shared-buffer cache manager. */
template <typename T>
class DoubleBufCache {
public:
    /** @brief constructor
     * @param[in] notify_func When a new buffer is fed, it triggers the function handle.
     */
    DoubleBufCache(std::function<void()> notify_func = std::function<void()>{nullptr}) noexcept
        : swap_ready(false)
    {
        this->notify_func = notify_func;
    }

    /** @brief destructor */
    ~DoubleBufCache() noexcept
    {
        if (!debug_info.empty()) {
            printf("DoubleBufCache %s destroyed.\n", debug_info.c_str());
        }
    }

    /** @brief Put the latest buffer into cache queue to be processed.
     *
     * Giving up control of previous front buffer.
     * @param[in] The latest buffer.
     */
    void feed(std::shared_ptr<T> pending)
    {
        if (nullptr == pending.get()) {
            throw "ERROR: feed an empty buffer to DoubleBufCache";
        }
        swap_mtx.lock();
        front_sp = pending;
        swap_mtx.unlock();
        swap_ready = true;
        if (notify_func) {
            notify_func();
        }
        return;
    }

    /** @brief Get the front buffer.
     * @return Front buffer.
     */
    std::shared_ptr<T> front() noexcept
    {
        return front_sp;
    }

    /** @brief Fetch the shared back buffer.
     * @return Back buffer.
     */
    std::shared_ptr<T> fetch() noexcept
    {
        if (swap_ready) {
            swap_mtx.lock();
            back_sp = front_sp;
            swap_mtx.unlock();
            swap_ready = false;
        }
        return back_sp;
    }

private:
    //! Notification function will be called, if a new buffer fed.
    std::function<void()> notify_func;
    //! The buffer cache can be swapped if the flag is equal to true.
    std::atomic<bool> swap_ready;
    //! Swapping mutex lock for thread safety.
    std::mutex swap_mtx;
    //! Front buffer for previous results saving.
    std::shared_ptr<T> front_sp;
    //! Back buffer to be fetched.
    std::shared_ptr<T> back_sp;

public:
    //! Indicate the name of an instantiated object for debug.
    std::string debug_info;
};
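/* A minimal usage sketch of DoubleBufCache (illustrative, not part of the
 * original header): one thread feeds the newest frame, another fetches
 * whatever is most recent, without blocking the producer.
 *
 * DoubleBufCache<cv::Mat> cache;
 * cache.debug_info = "frame-cache";
 *
 * // producer side (e.g. appsink callback):
 * cache.feed(std::make_shared<cv::Mat>(frame));
 *
 * // consumer side (e.g. appsrc need-data callback):
 * std::shared_ptr<cv::Mat> latest = cache.fetch();
 */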
static void
cb_decodebin_child_added (GstChildProxy* child_proxy, GObject* object,
    gchar* name, gpointer user_data)
{
    LOG_INFO_MSG ("cb_decodebin_child_added called");

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    LOG_INFO_MSG ("Element '%s' added to decodebin", name);

    {
        /* uridecodebin creates the demuxer automatically and exposes one
           src pad per parsed stream; developers should link those pads
           themselves. */
        if (g_strrstr (name, "qtdemux") == name) {
            vp->m_demux = reinterpret_cast<GstElement*> (object);
        }

        if ((g_strrstr (name, "h264parse") == name)) {
            vp->m_h264parse = reinterpret_cast<GstElement*> (object);
        }
    }

    if (g_strrstr (name, "qtivdec") == name) {
        vp->m_vdecoder = reinterpret_cast<GstElement*> (object);
        g_object_set (object, "turbo", vp->m_config.turbo, NULL);
        g_object_set (object, "skip-frames", vp->m_config.skip_frame, NULL);
    }
}

/*
 * This function is called when uridecodebin has created
 * the source element: filesrc/rtspsrc/appsrc,
 * so we have a chance to configure it.
 */
static void
cb_uridecodebin_source_setup (GstElement* pipeline, GstElement* source,
    gpointer user_data)
{
    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    LOG_INFO_MSG ("cb_uridecodebin_source_setup called");

    /* Configure rtspsrc
    if (g_object_class_find_property (G_OBJECT_GET_CLASS (source), "latency")) {
        LOG_INFO_MSG ("cb_uridecodebin_source_setup set %d latency",
            vp->m_config.rtsp_latency);
        g_object_set (G_OBJECT (source), "latency", vp->m_config.rtsp_latency, NULL);
    }
    */

    /* Configure appsrc
    GstCaps* src_caps;
    src_caps = gst_caps_new_simple ("video/x-raw",
        "format", G_TYPE_STRING, m_config.src_format.c_str(),
        "width",  G_TYPE_INT,    m_config.src_width,
        "height", G_TYPE_INT,    m_config.src_height, NULL);
    g_object_set (G_OBJECT (source), "caps", src_caps, NULL);
    g_signal_connect (source, "need-data",   G_CALLBACK (start_feed), data);
    g_signal_connect (source, "enough-data", G_CALLBACK (stop_feed),  data);
    gst_caps_unref (src_caps);
    */
}

static void
cb_uridecodebin_pad_added (GstElement* src, GstPad* new_pad, gpointer user_data)
{
    LOG_INFO_MSG ("cb_uridecodebin_pad_added called");

    GstPadLinkReturn ret;
    GstCaps*      new_pad_caps   = NULL;
    GstStructure* new_pad_struct = NULL;
    const gchar*  new_pad_type   = NULL;
    GstPad* v_sinkpad = NULL;
    GstPad* a_sinkpad = NULL;

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    new_pad_caps   = gst_pad_get_current_caps (new_pad);
    new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
    new_pad_type   = gst_structure_get_name (new_pad_struct);

    if (g_str_has_prefix (new_pad_type, "video/x-raw")) {
        LOG_INFO_MSG ("Linking video/x-raw");

        /* Attempt the link */
        v_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_videoConv), "sink");
        ret = gst_pad_link (new_pad, v_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link video src pad with videoconvert");
            goto exit;
        }
    } else if (g_str_has_prefix (new_pad_type, "audio/x-raw")) {
        LOG_INFO_MSG ("Linking audio/x-raw");

        a_sinkpad = gst_element_get_static_pad (
            reinterpret_cast<GstElement*> (vp->m_audioConv), "sink");
        ret = gst_pad_link (new_pad, a_sinkpad);
        if (GST_PAD_LINK_FAILED (ret)) {
            LOG_ERROR_MSG ("fail to link audio src pad with audioconvert");
            goto exit;
        }
    }

exit:
    /* Unreference the new pad's caps, if we got them */
    if (new_pad_caps != NULL)
        gst_caps_unref (new_pad_caps);

    /* Unreference the sink pads */
    if (v_sinkpad)
        gst_object_unref (v_sinkpad);
    if (a_sinkpad)
        gst_object_unref (a_sinkpad);
}

static void
cb_uridecodebin_child_added (GstChildProxy* child_proxy, GObject* object,
    gchar* name, gpointer user_data)
{
    LOG_INFO_MSG ("cb_uridecodebin_child_added called");
    LOG_INFO_MSG ("Element '%s' added to uridecodebin", name);

    VideoPipeline* vp = reinterpret_cast<VideoPipeline*> (user_data);

    if (g_strrstr (name, "decodebin") == name) {
        g_signal_connect (G_OBJECT (object), "child-added",
            G_CALLBACK (cb_decodebin_child_added), vp);
    }
}

VideoPipeline::VideoPipeline (const VideoPipelineConfig& config)
{
    m_config = config;
}

VideoPipeline::~VideoPipeline ()
{
    Destroy ();
}

bool VideoPipeline::Create (void)
{
    if (!(m_gstPipeline = gst_pipeline_new ("video-pipeline"))) {
        LOG_ERROR_MSG ("Failed to create pipeline named video-pipeline");
        goto exit;
    }
    gst_pipeline_set_auto_flush_bus (GST_PIPELINE (m_gstPipeline), true);

    if (!(m_source = gst_element_factory_make ("uridecodebin", "src"))) {
        LOG_ERROR_MSG ("Failed to create element uridecodebin named src");
        goto exit;
    }
    g_object_set (G_OBJECT (m_source), "uri", m_config.src.c_str(), NULL);
    g_signal_connect (G_OBJECT (m_source), "source-setup",
        G_CALLBACK (cb_uridecodebin_source_setup), reinterpret_cast<void*> (this));
    g_signal_connect (G_OBJECT (m_source), "pad-added",
        G_CALLBACK (cb_uridecodebin_pad_added), reinterpret_cast<void*> (this));
    g_signal_connect (G_OBJECT (m_source), "child-added",
        G_CALLBACK (cb_uridecodebin_child_added), reinterpret_cast<void*> (this));
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_source, NULL);

    if (!(m_videoConv = gst_element_factory_make ("videoconvert", "videoconv"))) {
        LOG_ERROR_MSG ("Failed to create element videoconvert named videoconv");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_videoConv, NULL);

    if (!(m_display = gst_element_factory_make ("waylandsink", "display"))) {
        LOG_ERROR_MSG ("Failed to create element waylandsink named display");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_display, NULL);

    if (!gst_element_link_many (m_videoConv, m_display, NULL)) {
        LOG_ERROR_MSG ("Failed to link videoconvert->waylandsink");
        goto exit;
    }

    if (!(m_audioConv = gst_element_factory_make ("audioconvert", "audioconv"))) {
        LOG_ERROR_MSG ("Failed to create element audioconvert named audioconv");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_audioConv, NULL);

    if (!(m_audioReSample = gst_element_factory_make ("audioresample", "resample"))) {
        LOG_ERROR_MSG ("Failed to create element audioresample named resample");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_audioReSample, NULL);

    if (!(m_player = gst_element_factory_make ("pulsesink", "player"))) {
        LOG_ERROR_MSG ("Failed to create element pulsesink named player");
        goto exit;
    }
    gst_bin_add_many (GST_BIN (m_gstPipeline), m_player, NULL);
    g_object_set (G_OBJECT (m_player), "volume", 1.0, NULL);

    if (!gst_element_link_many (m_audioConv, m_audioReSample, m_player, NULL)) {
        LOG_ERROR_MSG ("Failed to link audioconvert->audioresample->pulsesink");
        goto exit;
    }

    return true;

exit:
    LOG_ERROR_MSG ("Failed to create video pipeline");
    return false;
}

bool VideoPipeline::Start (void)
{
    if (GST_STATE_CHANGE_FAILURE == gst_element_set_state (m_gstPipeline,
        GST_STATE_PLAYING)) {
        LOG_ERROR_MSG ("Failed to set pipeline to playing state");
        return false;
    }

    return true;
}

bool VideoPipeline::Pause (void)
{
    GstState state, pending;

    LOG_INFO_MSG ("Pause called");

    /* Note: 5 * GST_SECOND / 1000 is a 5 ms timeout. */
    if (GST_STATE_CHANGE_ASYNC == gst_element_get_state (
        m_gstPipeline, &state, &pending, 5 * GST_SECOND / 1000)) {
        LOG_WARN_MSG ("Failed to get state of pipeline");
        return false;
    }

    if (state == GST_STATE_PAUSED) {
        return true;
    } else if (state == GST_STATE_PLAYING) {
        gst_element_set_state (m_gstPipeline, GST_STATE_PAUSED);
        gst_element_get_state (m_gstPipeline, &state, &pending,
            GST_CLOCK_TIME_NONE);
        return true;
    } else {
        LOG_WARN_MSG ("Invalid state of pipeline(%d)", state);
        return false;
    }
}

bool VideoPipeline::Resume (void)
{
    GstState state, pending;

    LOG_INFO_MSG ("Resume called");

    /* Note: 5 * GST_SECOND / 1000 is a 5 ms timeout. */
    if (GST_STATE_CHANGE_ASYNC == gst_element_get_state (
        m_gstPipeline, &state, &pending, 5 * GST_SECOND / 1000)) {
        LOG_WARN_MSG ("Failed to get state of pipeline");
        return false;
    }

    if (state == GST_STATE_PLAYING) {
        return true;
    } else if (state == GST_STATE_PAUSED) {
        gst_element_set_state (m_gstPipeline, GST_STATE_PLAYING);
        gst_element_get_state (m_gstPipeline, &state, &pending,
            GST_CLOCK_TIME_NONE);
        return true;
    } else {
        LOG_WARN_MSG ("Invalid state of pipeline(%d)", state);
        return false;
    }
}

void VideoPipeline::Destroy (void)
{
    if (m_gstPipeline) {
        gst_element_set_state (m_gstPipeline, GST_STATE_NULL);
        gst_object_unref (m_gstPipeline);
        m_gstPipeline = NULL;
    }
}

================================================
FILE: application_develop/uridecodebin/src/main.cpp
================================================
/*
 * @Description: Test Program.
 * @version: 1.0
 * @Author: Ricardo Lu
 * @Date: 2021-08-28 09:17:16
 * @LastEditors: Please set LastEditors
 * @LastEditTime: 2021-09-03 23:54:26
 */

/* 原始头文件名在此处缺失,以下为按用途推测的补全(假设) */
#include <sys/stat.h>
#include <string>
#include <sstream>
#include <gflags/gflags.h>

#include "VideoPipeline.h"
#include "DoubleBufferCache.h"

static GMainLoop* g_main_loop = NULL;

static bool validateSrcUri (const char* name, const std::string& value)
{
    if (!value.compare ("")) {
        LOG_ERROR_MSG ("Source Uri required!");
        return false;
    }

    int pos = value.find ("//");
    /* e.g. "file:///path" -> uri_type = "file:", uri_path = "///path" */
    std::string uri_type = value.substr (0, pos);
    std::string uri_path = value.substr (pos);

    if (!uri_type.compare ("file:")) {
        // make sure the file exists.
        struct stat statbuf;
        if (!stat (uri_path.c_str (), &statbuf)) {
            LOG_INFO_MSG ("Found source file: %s", uri_path.c_str ());
            return true;
        }
    } else {
        return true;
    }

    LOG_ERROR_MSG ("Invalid source uri.");
    return false;
}

DEFINE_string (srcuri, "", "source uri, e.g. file:///absolute/path/video.mp4 or rtsp://...");
DEFINE_validator (srcuri, &validateSrcUri);

int main (int argc, char* argv[])
{
    google::ParseCommandLineFlags (&argc, &argv, true);

    VideoPipelineConfig m_vpConfig;
    VideoPipeline* m_vp = NULL;
    std::ostringstream m_pipelineCmd;
    std::string m_strPipeline;

    gst_init (&argc, &argv);

    if (!(g_main_loop = g_main_loop_new (NULL, FALSE))) {
        LOG_ERROR_MSG ("Failed to new a object with type GMainLoop");
        goto exit;
    }

    m_vpConfig.src = FLAGS_srcuri;
    m_vpConfig.turbo = true;
    m_vpConfig.skip_frame = true;

    m_vp = new VideoPipeline (m_vpConfig);

    if (!m_vp->Create ()) {
        LOG_ERROR_MSG ("Pipeline Create failed: lack of elements");
        goto exit;
    }

    m_vp->Start ();

    g_main_loop_run (g_main_loop);

exit:
    if (g_main_loop) g_main_loop_unref (g_main_loop);

    if (m_vp) {
        m_vp->Destroy ();
        delete m_vp;
        m_vp = NULL;
    }

    google::ShutDownCommandLineFlags ();
    return 0;
}

================================================
FILE: basic_theory/README.md
================================================
# Basic Theory

[![](https://img.shields.io/badge/Author-@RicardoLu-red.svg)](https://github.com/gesanqiu)![](https://img.shields.io/badge/Version-2.0.0-blue.svg)[![](https://img.shields.io/badge/license-GPL-000000.svg)](https://opensource.org/licenses/GPL-3.0/)

这一部分将主要介绍:

- GStreamer的基础知识,包含一条GStreamer Pipeline的各个组成部分;
- 常用API Reference;
- Basic tutorials和Playback tutorials的翻译和讲解补充。

## 学习建议

在进行更深入的学习之前,你应该先了解GStreamer的基本概念和组成,网上已经有大量的相关教程,这里仅做简单介绍。

作为Linus的拥趸,我永远相信“Talk is cheap. Show me the code.”的正确性,因此API Reference只是在开发中用来查阅的,建议各位读者从Basic tutorial开始,结合代码实际操作以获得更深刻的理解。

关于Basic tutorial和Playback
tutorial的阅读顺序,建议如下: - Basic tutorial 10: GStreamer tools - Basic tutorial 11: Debugging tools - Basic tutorial 14: Handy elements **注:**这三篇是介绍GSteamer的基本工具和常用插件,建议和前三篇教程同时阅读。 - Basic tutorial 1: Hello world! - Basic tutorial 2: GStreamer concepts - Basic tutorial 3: Dynamic pipelines - Basic tutorial 6: Media formats and Pad Capabilities - Basic tutorial 7: Multithreading and Pad Availability - Basic tutorial 8: Short-cutting the pipeline - Playback tutorial 1: Playbin usage - Playback tutorial 2: Subtitle management - Playback tutorial 3: Short-cutting the pipeline - Basic tutorial 12: Streaming - Playback tutorial 4: Progressive streaming - Playback tutorial 7: Custom playbin sinks - Basic tutorial 9: Media information gathering - Basic tutorial 13: Playback speed - Playback tutorial 8: Hardware-accelerated video decoding ================================================ FILE: basic_theory/app_dev_manual/autoplugging.md ================================================ # Autoplugging 由于autoplugging这一概念具备充分的动态性,GStreamer可以自动拓展以支持新的数据类型而无需修改autoplugger。 ## Media types as a way to identify streams Auto plugging的核心问题是如何使用media type作为一种动态和可拓展的识别stream的方式,在之前的文章中有提到pad的capabilities是elements交互协商的一种机制,一个capability是一种media type和一系列properties的组合。 ## Media steam type detection Auto plugging在具体是线上依赖于media type detection,GStreamer的pipeline自带的typefinding机制负责实现这部分功能,当stream进入到pipeline中插件,只要stream type是未知的,就会一直读data。 - 在typefinding阶段,它会将所有实现了typefinder的plugins提供数据,当有一个typefinder识别了这个流(能够处理这种media type的数据),那么typefinder将发出一个信号。假如数据没有被任何一个plugin识别,那么进一步的media处理将停止(程序终止运行)。 - 实现了typefinding功能的plugin需要上报自身能够处理的media type和通常封装这种media type的文件格式,以及一个typefind函数。 - GStreamer提供了一个typefind插件,用户可以依赖他完成自己的auto plugging过程。当`typefind()`被调用,plugin将查看data的media type是否与已上报的media type是否匹配,如果匹配成功,plugin将会通知typefind element它识别出的media type以及置信度。当整个typefinding过程结束,假如有plugin识别了data,那么typefind element将出发`have-type`信号,否则报错。 ```c++ #include [.. my_bus_callback goes here ..] static gboolean idle_exit_loop (gpointer data) { g_main_loop_quit ((GMainLoop *) data); /* once */ return FALSE; } static void cb_typefound (GstElement *typefind, guint probability, GstCaps *caps, gpointer data) { GMainLoop *loop = data; gchar *type; type = gst_caps_to_string (caps); g_print ("Media type %s found, probability %d%%\n", type, probability); /* if (strcmp(type, "video/x-h264") == 0) { // link element gst_element_link_many(typefind, data->decoder, NULL); } */ g_free (type); /* since we connect to a signal in the pipeline thread context, we need * to set an idle handler to exit the main loop in the mainloop context. * Normally, your app should not need to worry about such things. 
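 * Here g_idle_add() attaches idle_exit_loop to the default main context,
 * so the loop is quit from the main-loop thread rather than the streaming thread.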
*/ g_idle_add (idle_exit_loop, loop); } gint main (gint argc, gchar *argv[]) { GMainLoop *loop; GstElement *pipeline, *filesrc, *typefind, *fakesink; GstBus *bus; /* init GStreamer */ gst_init (&argc, &argv); loop = g_main_loop_new (NULL, FALSE); /* check args */ if (argc != 2) { g_print ("Usage: %s \n", argv[0]); return -1; } /* create a new pipeline to hold the elements */ pipeline = gst_pipeline_new ("pipe"); bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); gst_bus_add_watch (bus, my_bus_callback, NULL); gst_object_unref (bus); /* create file source and typefind element */ filesrc = gst_element_factory_make ("filesrc", "source"); g_object_set (G_OBJECT (filesrc), "location", argv[1], NULL); typefind = gst_element_factory_make ("typefind", "typefinder"); g_signal_connect (typefind, "have-type", G_CALLBACK (cb_typefound), loop); fakesink = gst_element_factory_make ("fakesink", "sink"); /* setup */ gst_bin_add_many (GST_BIN (pipeline), filesrc, typefind, fakesink, NULL); gst_element_link_many (filesrc, typefind, fakesink, NULL); gst_element_set_state (GST_ELEMENT (pipeline), GST_STATE_PLAYING); g_main_loop_run (loop); /* unset */ gst_element_set_state (GST_ELEMENT (pipeline), GST_STATE_NULL); gst_object_unref (GST_OBJECT (pipeline)); return 0; } ``` ================================================ FILE: basic_theory/app_dev_manual/fundamental.md ================================================ # Building an Application 这一章节,我们将讨论GStreamer的基础概念和最常用的objects,例如elements,pads和buffers。我们将使用这些objects的可视化表示,以便能够可视化您稍后将学习构建的更复杂的pipeline。你将对GStreamer的API有一个初步印象,足以构建基本的应用。之后你将学着构建一个基础的命令行应用。 ## Initializing GStreamer 在开发GStreamer应用的过程中,可以简单的只包含`gst/gst.h`即可访问大部分库函数,除了某些特殊plugin的定制化API需要单独包含头文件。 此外,为了能够调用GStreamer库,在主程序中必须调用`gst_init(&argc, &argv)`函数完成必须的初始化工作以及命令行参数的分析。 ## Element GStreamer应用中最重要的对象是GstElement对象,element是多媒体pipeline基本的构建组件,所有的高级组件都集成自GstElement。 Gstreamer中主要有三种elements:sink element,src element,filter-like element,element的类型由其具备哪些pads决定(pad相关内容将在后续小节展开介绍)。 ![Visualisation of a source element](images/src-element.png) ![Visualisation of a filter element](images/filter-element.png) ![Visualisation of a sink element](images/sink-element.png) ### Creating a GstElement ```c++ gst_init (&argc, &argv); GstElement* element; element = gst_element_factory_make("factory name", "unique element name"); // or GstElementFactory* factory; GstElement* element; factory = gst_element_factory_find ("factory name"); element = gst_element_factory_create (factory, "unique element name"); ``` ### Using an element as a GObject 一个GstElement能够拥有一些属性,这些属性是使用标准GObject的属性实现的。每个GstElement至少继承了GObject的“name”属性,也即创建GstElement时所传递的“unique element name”。GObject为common属性提供了setter和getter,但更一般的做法是使用`g_object_set/get`。 ```c++ g_object_set(G_OBJECT(element), "propeerty name", &value, NULL); g_object_get(G_OBJECT(element), "propeerty name", &value, NULL); ``` 在属性的查询上,GStreamer提供了`gst-inspect-1.0`工具用于查询指定element的属性和简单描述。 除了属性,通常一个GstElement还提供一些GObject信号以实现灵活的**回调机制**用于pipeline和application交互。 ### More about element factories Element factories是GStreamer注册表中检索的基本单位,他们描述了所有GStreamer能够创建的插件和elements。 从上面创建GstElement对象的过程来看,可以理解为我们从一个GstElementFactory中创建了一个GstElemnt对象,但是我对于GstElementFactory的理解很粗糙,官方文档也没有详细的设计说明。这个GstElementFactory和工厂模式不太一样,在GStreamer中一个plugin就是一个工厂,或许是因为一个plugin可能由多个elements实现。 GstElementFactory最重要的特性就是它拥有其所属于的plugin所支持的pads的完整描述,而不需要将element真正加载进内存。 ### Link Elements 两个elements的连接,实际是src-pad和sink-pad之间的negotiation,因此连接需要两个pad的caps具备交集以及elements处于同一个GstBin。 ```c++ gst_element_link(); gst_elemenet_link_many(); 
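// 也可以按 pad 名称显式链接两个 element,例如(示意,demux/parser 为假设的变量名):
// gst_element_link_pads (demux, "video_0", parser, "sink");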
gst_element_link_pads(); ``` ### Element States GStreamer的elements仅有四种状态,四种状态从NULL<->READY<->PAUSE<->PLAYING必须依次切换,即使越级切换状态成功也是接口内部完成了相关的操作。 `GST_STATE_NULL`:默认状态,在这个状态不会申请任何资源,当element的引用计数变为0时必须处于NULL状态。其他状态切换到这个状态会释放掉所有已申请的资源。 `GST_STATE_READY`:在这个状态下element会申请相关的全局资源,但不涉及stream数据。简单来说就是NULL->READY仅是打开相关的硬件设备,申请buffer;PLAYING->READY就是把停止读取stream数据。 `GST_STATE_PAUSE`:这个状态实际是GStreamer中最常见的一个状态,在这个阶段pipeline打开了stream但并未处理它,例如sink element已经读取到了视频文件的第一帧并准备播放。 `GST_STATE_PLAYING`:PLAYING状态和PAUSE状态实际并没有区别,只是PLAYING允许clock 润run。 通常只需要设置bin或者pipeline元素的状态即可自动完成其内含的所有elements的状态切换,但假如动态的向一条处于PLAYING状态的pipeline添加element,则需要手动完成这个element的状态切换。 ## Bin GstBin可以将一系列elements组合形成一个逻辑上的element,以便从整体上操控和管理elements。 - 最外层的bin即使pipeline。 - GstBin管理它内部elements的状态。 ## Bus GstBus是将stream线程消息转发给应用程序线程的系统。 - GstBus本身运行在应用程序的上下文中,但能够自动监听GStreamer内的线程。 - 每条pipeline都自带一条GstBus,开发人员仅需为其设定handler以便在接收到消息是能或者正确的处理。 ```c++ #include static GMainLoop *loop; static gboolean my_bus_callback (GstBus * bus, GstMessage * message, gpointer data) { g_print ("Got %s message\n", GST_MESSAGE_TYPE_NAME (message)); switch (GST_MESSAGE_TYPE (message)) { case GST_MESSAGE_ERROR:{ GError *err; gchar *debug; gst_message_parse_error (message, &err, &debug); g_print ("Error: %s\n", err->message); g_error_free (err); g_free (debug); g_main_loop_quit (loop); break; } case GST_MESSAGE_EOS: /* end-of-stream */ g_main_loop_quit (loop); break; default: /* unhandled message */ break; } /* we want to be notified again the next time there is a message * on the bus, so returning TRUE (FALSE means we want to stop watching * for messages on the bus and our callback should not be called again) */ return TRUE; } gint main (gint argc, gchar * argv[]) { GstElement *pipeline; GstBus *bus; guint bus_watch_id; /* init */ gst_init (&argc, &argv); /* create pipeline, add handler */ pipeline = gst_pipeline_new ("my_pipeline"); /* adds a watch for new message on our pipeline's message bus to * the default GLib main context, which is the main context that our * GLib main loop is attached to below */ bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); bus_watch_id = gst_bus_add_watch (bus, my_bus_callback, NULL); gst_object_unref (bus); /* [...] */ /* create a mainloop that runs/iterates the default GLib main context * (context NULL), in other words: makes the context check if anything * it watches for has happened. When a message has been posted on the * bus, the default main context will automatically call our * my_bus_callback() function to notify us of that message. 
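 * (The watch id returned by gst_bus_add_watch() is released later with
 * g_source_remove().)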
* The main loop will be run until someone calls g_main_loop_quit() */ loop = g_main_loop_new (NULL, FALSE); g_main_loop_run (loop); /* clean up */ gst_element_set_state (pipeline, GST_STATE_NULL); gst_object_unref (pipeline); g_source_remove (bus_watch_id); g_main_loop_unref (loop); return 0; } ``` loop线程(主线程)会定期检查它所监听的消息是否发生。 - GstBus的消息监听是异步的,无法处理同步需求。 - `gst_bus_add_watch()`会处理所有类型的GstBus消息,可以在handler中使用switch语句进行细化,也可以用`gst_bus_add_signal_watch(bus)`和`g_signal_connect(bus, "message::eos", G_CALLBACK(cb_message_eos), NULL)`来为特定类型的消息创建一个handler。 ## Pads and Capabilities Pad是一个element与外部交互的接口,数据从一个element的src-pad传递给另一个element的sink-pad。Pad的Capabilities表明element能处理的数据。 ### Dynamic(or sometimes) pads - 某些elements在创建时并不会带有pads,例如demuxer,demuxer在创建时并不带有sink-pad,直到pipeline处于PAUSE状态读取到了stream的足够信息。demuxer将解析读取的stream中的所有基本流(Video,Audio)数据并为它们分别创建一个能处理对应流数据类型的cap的pad。 - `pad-added`:对于具有sometimes pad的element,它将在创建一个新的pad的时候发出一个signal,用户需要对这个信号做一定的处理以便能正常的连接pipeline中的elemnet。 ```C++ static void cb_new_pad(GstElement* element, GstPad* pad, gpointer data); g_signal_connect(demux, "pad-added", G_CALLBACK(cb_new_pad), NULL); ``` ### Request pads request pads是根据请求才创建的pad,广泛应用于muxer,aggregator,tee中,大多数情况下用户不需要处理request pads。 ```c++ gst_element_request_pad_simple(tee, "src_%d"); gst_element_get_compatiable_pad(mux, tolink_pad, NULL); ``` ### Capabilities of a pad Capabilities是用于描述一个pad能够处理或正在处理的数据类型的机制。GStreamer使用GstCaps描述pads的capabilities,一个GstCaps将含有一个或多个GStructure来描述媒体类型,但对于已经完成negotiation的pad,其GstCaps的GStructure是唯一的,并且属性值是固定的。 ### What capabilities are used for - Autoplugging:基于pad的的caps自动查找能与其link的element; - Compatibility detection:为pad negotiation提供支持; - Metadata:读取pad的caps,可以或缺当前正在播放的流 的信息; - Filtering:用于限制两个pad之间支持的流类型,常被置于convert elements之后,用于指定上一个与其连接的elements的输出数据格式。 ### Using capabilities for metadata ```c++ const GStructure* structure; structure = gst_caps_get_structure(caps, 0); gst_structure_get_int(structure, "width", &width); ``` ### Creating capabilities of filtering ```c++ GstCaps* caps; gboolean link_ok; caps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, "I420", "width", G_TYPE_INT, 384, "height", G_TYPE_INT, 288, "framerate", GST_TYPE_FRACTION, 25, 1, NULL); link_ok = gst_element_link_filtered (element1, element2, caps); gst_caps_unref (caps); // or GstElement* capfilter; capfilter = gst_element_factory_make("capfilter", "capfilter0"); g_object_set(G_OBJECT(capfilter), "caps", caps, NULL); gst_bin_add_many(GST_BIN(pipeline), capfilter, NULL); gst_element_link_many(elem1, capfilter, elem2, NULL); ``` `gst_element_link_filtered()`内部会自动根据caps创建一个capfilter element并将其插在两个待链接的元素之间。 ================================================ FILE: basic_theory/app_dev_manual/interfaces.md ================================================ # Interfaces 在应用程序中,将element定义为GObject对象便可沿用GObject中设置对象属性的方式,这为应用程序与element的交互带来便利。当然,此种设置对象属性的方式仅包含面向对象编程中常见的gette与setter,无法支持更复杂的交互需求。对于复杂交互需求,GStreamer使用一套基于GObject GTypeInterface类型的接口。 下文意图以简介形式带领读者了解各类接口,因此不包含源码。请读者在有需要了解更多细节时参考API指南。 ## The URI Handler interface 在我们目前展示的示例中,源节点仅出现过读取本地文件的‘filesrc’,但实际上GStreamer支持多种类型的数据源。 GStreamer不要求应用程序掌握任何关于URI的使用细节,例如对于某种特定协议的网络源需使用哪类element。这些细节都已通过GstURIHandler接口进行抽象。 对于URI的命名没有什么严格的规则,一般而言,使用常见的命名规则即可,例如以下这些。 ```c++ file://// http://// rtsp:/// dvb:// ... 
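// 一些假设的具体示例:
// file:///home/user/video.mp4
// rtsp://192.168.1.10:554/stream1
// http://example.com/video.ogg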
``` 应用程序可使用gst_element_make_from_uri()获取支持特定URI的源或接收器element,并根据需要将GST_URI_SRC或GST_URI_SINK用作GstURIType。 另外,可以使用Glib的g_filename_to_uri()和g_uri_to_filename()在文件名与URI之间进行转换。 ## The Color Balance interface GstColorBalance接口用于控制element中与视频相关的属性,如亮度、对比度等。它之所以存在,是因为据它作者所述,没有办法使用GObject动态注册这些视频相关属性。 xvimagesink,glimagesink和Video4linux2等若干element已实现GstColorBalance接口。 ## The Video Overlay interface GstVideoOverlay接口用于在应用程序窗口中嵌入视频流。应用程序可为实现该接口的element提供一个窗口句柄,然后该element利用此窗口句柄进行绘制,而不是另建一个新的顶层窗口。这种方式很利于在视频播放器中嵌入视频。 Video4linux2,glimagesink,ximagesink,xvimagesink和sdlvideosink等若干element已实现GstVideoOverlay接口。 ## Other interfaces GStreamer还提供了相当多的其他接口,并已由一些element实现。这些接口如下。 - GstChildProxy 访问multi-child element内部元素的属性 - GstNavigation 发送和解析navigation事件 - GstPreset 处理element预设 - GstRTSPExtension RTSP扩展接口 - GstStreamVolume 访问和控制流音量大小 - GstTagSetter 处理媒体元数据 - GstTagXmpWriter 进行XMP序列化 - GstTocSetter 设置和检索类TOC数据 - GstTuner 射频调谐操作 - GstVideoDirection 视频旋转和翻转 - GstVideoOrientation 控制视频方向 - GstWaylandVideo Wayland视频接口 ================================================ FILE: basic_theory/app_dev_manual/metadata.md ================================================ # Metadata GStreamer对其支持的2种元数据有着清晰的分类。其一,Stream-tag,这类元数据以非技术的方式描述数据流中的内容。其二,Stream-info,它则以技术方式准确描述数据流中的各项属性。 例如,当数据流为一首MV歌曲时,Stream-tag的内容可以为歌曲作者,歌曲名称和唱片信息,Stream-info的内容则为视频大小,音频采样频率,编码方式等。  GStreamer使用基于bus的标签系统处理Stream-tag类型的元数据。而Stream-info元数据则是通过Pad之间协商Capabilities时来传递。 ## Metadata reading 如前所述,获取Stream-info元数据最便利的方式是从一个GstPad实例中读取。注意,这种方式需要应用程序对所有需要读取的Pad有访问权限。由于在《Using capabilities for metadata》一文中已对此方式展开讲解,这里不再重复。  Stream-tag元数据的读取使用的是GStreamer中的bus。应用程序监听GST_MESSAGE_TAG类型的消息并从中处理即可。基本操作也已涵盖在《Bus》一节中。  但是,需要特别注意的是,GST_MESSAGE_TAG消息可能会被产生多次,因此应用程序需要负责以连贯的方式对它们进行合并和显示。这可以通过gst_tag_list_merge()函数完成。不过在某些实际场景中,请确保信息及时更新,比如在新加载一首歌曲时或持续几分钟接收互联网广播后,应用都需要清空缓存空间。并且,如需令后出现的Stream-tag元数据能成功替换已有元数据,确保在合并时使用GST_TAG_MERGE_PREPEND模式。  下面的示例演示如何从文件中提取标签并将标签打印至控制台。 ```c /* compile with: * gcc -o tags tags.c `pkg-config --cflags --libs gstreamer-1.0` */ #include static void print_one_tag (const GstTagList * list, const gchar * tag, gpointer user_data) { int i, num; num = gst_tag_list_get_tag_size (list, tag); for (i = 0; i < num; ++i) { const GValue *val; /* Note: when looking for specific tags, use the gst_tag_list_get_xyz() API, * we only use the GValue approach here because it is more generic */ val = gst_tag_list_get_value_index (list, tag, i); if (G_VALUE_HOLDS_STRING (val)) { g_print ("\t%20s : %s\n", tag, g_value_get_string (val)); } else if (G_VALUE_HOLDS_UINT (val)) { g_print ("\t%20s : %u\n", tag, g_value_get_uint (val)); } else if (G_VALUE_HOLDS_DOUBLE (val)) { g_print ("\t%20s : %g\n", tag, g_value_get_double (val)); } else if (G_VALUE_HOLDS_BOOLEAN (val)) { g_print ("\t%20s : %s\n", tag, (g_value_get_boolean (val)) ? 
"true" : "false"); } else if (GST_VALUE_HOLDS_BUFFER (val)) { GstBuffer *buf = gst_value_get_buffer (val); guint buffer_size = gst_buffer_get_size (buf); g_print ("\t%20s : buffer of size %u\n", tag, buffer_size); } else if (GST_VALUE_HOLDS_DATE_TIME (val)) { GstDateTime *dt = g_value_get_boxed (val); gchar *dt_str = gst_date_time_to_iso8601_string (dt); g_print ("\t%20s : %s\n", tag, dt_str); g_free (dt_str); } else { g_print ("\t%20s : tag of type '%s'\n", tag, G_VALUE_TYPE_NAME (val)); } } } static void on_new_pad (GstElement * dec, GstPad * pad, GstElement * fakesink) { GstPad *sinkpad; sinkpad = gst_element_get_static_pad (fakesink, "sink"); if (!gst_pad_is_linked (sinkpad)) { if (gst_pad_link (pad, sinkpad) != GST_PAD_LINK_OK) g_error ("Failed to link pads!"); } gst_object_unref (sinkpad); } int main (int argc, char ** argv) { GstElement *pipe, *dec, *sink; GstMessage *msg; gchar *uri; gst_init (&argc, &argv); if (argc < 2) g_error ("Usage: %s FILE or URI", argv[0]); if (gst_uri_is_valid (argv[1])) { uri = g_strdup (argv[1]); } else { uri = gst_filename_to_uri (argv[1], NULL); } pipe = gst_pipeline_new ("pipeline"); dec = gst_element_factory_make ("uridecodebin", NULL); g_object_set (dec, "uri", uri, NULL); gst_bin_add (GST_BIN (pipe), dec); sink = gst_element_factory_make ("fakesink", NULL); gst_bin_add (GST_BIN (pipe), sink); g_signal_connect (dec, "pad-added", G_CALLBACK (on_new_pad), sink); gst_element_set_state (pipe, GST_STATE_PAUSED); while (TRUE) { GstTagList *tags = NULL; msg = gst_bus_timed_pop_filtered (GST_ELEMENT_BUS (pipe), GST_CLOCK_TIME_NONE, GST_MESSAGE_ASYNC_DONE | GST_MESSAGE_TAG | GST_MESSAGE_ERROR); if (GST_MESSAGE_TYPE (msg) != GST_MESSAGE_TAG) /* error or async_done */ break; gst_message_parse_tag (msg, &tags); g_print ("Got tags from element %s:\n", GST_OBJECT_NAME (msg->src)); gst_tag_list_foreach (tags, print_one_tag, NULL); g_print ("\n"); gst_tag_list_unref (tags); gst_message_unref (msg); } if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) { GError *err = NULL; gst_message_parse_error (msg, &err, NULL); g_printerr ("Got error: %s\n", err->message); g_error_free (err); } gst_message_unref (msg); gst_element_set_state (pipe, GST_STATE_NULL); gst_object_unref (pipe); g_free (uri); return 0; } ``` ## Tag writing 应用程序通过GstTagSetter接口写入Stream-tag元数据。所有支持tag-set的element都可以完成 Stream-tag元数据的写入。  为了找到流水线中支持tag-set的element,应用可以通过gst_bin_iterate_all_by_interface (pipeline, GST_TYPE_TAG_SETTER)函数进行检查。对于最终产生Stream-tag元数据的组件,这样的element常见的有编码器和复用器,应用可以调用gst_tag_setter_merge_tags ()函数将一个Stream-tag元数据列表设置为该element的元数据,或使用gst_tag_setter_add_tags ()函数来为element设置一系列独立的Stream-tag。  还有一个有用的Stream-tag特性是流水线中所有Stream-tag都会被保存。这意味着当应用程序在转码一个含有Stream-tag的文件为另一种支持Stream-tag的媒体格式时,原始Stream-tag会被认为是数据流的一部分,从而被自然合并到新媒体格式的文件中。 ================================================ FILE: basic_theory/app_dev_manual/threads.md ================================================ # Threads GStreamer的设计原生支持多线程,并完全保证线程安全。大多数情况下,多线程实现细节对基于GStreamer开发的应用程序隐藏,因为这会让应用程序开发更便利。而在某些场景下,应用程序可能会介入Gstreamer的多线程机制。此时,Gstreamer允许应用程序指定在流水线内的某些部分使用多线程。具体请参考《When would you want to force a thread?》一节。 Gstreamer还支持开发人员在线程被创建时获取通知。从而,开发人员可以配置线程的优先级,设置线程池的相关行为等。具体请参考《Configuring Threads in GStreamer》一节。 ## Scheduling in GStreamer GStreamer中的每一个element可以决定自己的数据调度方式,即element内pad的数据调度是使用push模式还是pull模式。举例来说,一个element可以选择开启一个线程从自己的sink-pad中拉取数据,或者开启一个线程向自己的source-pad中推送数据。并且,element还支持数据处理过程中的上下游线程各自设置为push或pull模式。换句话说,GStreamer不会对element选择哪种数据调度方式做出限制。更多具体信息请参考插件编写指南。 不管哪种调度方式,流水线中一定会存在某些element开启线程处理数据,我们称这些线程为“streaming 
threads”。在代码中,“streaming threads”往往是一个GstTask实例,它们统一从一个GstTaskPool中被创建。在下一节,我们会学习如果从GstTask,GstTaskPool获取消息并进行配置。 ## Configuring Threads in GStreamer “streaming threads”的运行状态会经由发布在GstBus中的STREAM_STATUS消息通知应用程序。根据运行状态不同,消息被分为以下多种类型: - 当一个新的线程将要被创建时,会有一条类型为GST_STREAM_STATUS_TYPE_CREATE的STREAM_STATUS消息被发出。只有在接收到这条消息后,应用程序才能为GstTask配置GstTaskPool。此时配置的GstTaskPool可由应用程序定制,定制后的任务池会提供线程来实现“streaming threads”的具体需求。 需要注意的是,当应用程序在定制化GstTaskPool时,必须以同步方式处理STREAM_STATUS消息;当应用程序不定制GstTaskPool时,处理STREAM_STATUS消息的函数返回后,GstTask会使用默认的GstTaskPool。 - 当一个线程进入或离开时,应用程序可以配置这个线程的优先级。当线程被销毁时,应用程序也会得到STREAM_STATUS消息通知。 - 当线程开启、暂停和终止时,应用程序也会得到STREAM_STATUS消息通知。在GUI应用中。这些消息可以被用来可视化“streaming threads”的运行状态。 下一小节中会介绍一个具体的示例。 ### Boost priority of a thread ``` .----------. .----------. | fakesrc | | fakesink | | src->sink | '----------' '----------' ``` 以上图的简单流水线为例。其中的数据调度模式为,fakesrc开启“streaming threads”以生成虚拟数据,同时它还使用push模式将数据推送给fakesink。应用程序若想提升“streaming threads”的优先级则可通过下述方法实现: - 当流水线状态从READY变为PAUSED时,fakesrc生成一条STREAM_STATUS消息,表示其需要一个“streaming threads”来将数据推送给fakesink。 - 应用程序从bus上收到这条消息时,会使用同步方式调用一个bus响应函数处理此消息。在该函数中,应用程序会为消息中传递而来的GstTask实例配置一个定制化的GstTaskPool。这个定制化的任务池会负责创建线程。而我们提升“streaming threads”优先级的需求可以在任务池创建线程时完成。 - 另一种提高“streaming threads”优先级的方法如下。利用这条消息是被bus响应函数同步方式处理的特性,此时响应函数已经在一个线程内,应用程序可以使用ENTER/LEAVE通知来提升当前线程的优先级,甚至是操作系统对当前线程的调度策略。 在上段第一点中,我们需要实现一个配置给GstTask实例的定制化GstTaskPool。下面的代码就是一个定制化GstTaskPool的实现。实现方式使用Gobject的派生方式,从GstTaskPool派生出子类,TestRTPool。TestRTPool类的push方法中,应用程序使用pthread创建了一个SCHED_RR轮转法实时线程。注意,创建实时线程可能要求应用程序获得更多的系统权限。 ```c #include typedef struct { pthread_t thread; } TestRTId; G_DEFINE_TYPE (TestRTPool, test_rt_pool, GST_TYPE_TASK_POOL); static void default_prepare (GstTaskPool * pool, GError ** error) { /* we don't do anything here. We could construct a pool of threads here that * we could reuse later but we don't */ } static void default_cleanup (GstTaskPool * pool) { } static gpointer default_push (GstTaskPool * pool, GstTaskPoolFunction func, gpointer data, GError ** error) { TestRTId *tid; gint res; pthread_attr_t attr; struct sched_param param; tid = g_slice_new0 (TestRTId); pthread_attr_init (&attr); if ((res = pthread_attr_setschedpolicy (&attr, SCHED_RR)) != 0) g_warning ("setschedpolicy: failure: %p", g_strerror (res)); param.sched_priority = 50; if ((res = pthread_attr_setschedparam (&attr, ¶m)) != 0) g_warning ("setschedparam: failure: %p", g_strerror (res)); if ((res = pthread_attr_setinheritsched (&attr, PTHREAD_EXPLICIT_SCHED)) != 0) g_warning ("setinheritsched: failure: %p", g_strerror (res)); res = pthread_create (&tid->thread, &attr, (void *(*)(void *)) func, data); if (res != 0) { g_set_error (error, G_THREAD_ERROR, G_THREAD_ERROR_AGAIN, "Error creating thread: %s", g_strerror (res)); g_slice_free (TestRTId, tid); tid = NULL; } return tid; } static void default_join (GstTaskPool * pool, gpointer id) { TestRTId *tid = (TestRTId *) id; pthread_join (tid->thread, NULL); g_slice_free (TestRTId, tid); } static void test_rt_pool_class_init (TestRTPoolClass * klass) { GstTaskPoolClass *gsttaskpool_class; gsttaskpool_class = (GstTaskPoolClass *) klass; gsttaskpool_class->prepare = default_prepare; gsttaskpool_class->cleanup = default_cleanup; gsttaskpool_class->push = default_push; gsttaskpool_class->join = default_join; } static void test_rt_pool_init (TestRTPool * pool) { } GstTaskPool * test_rt_pool_new (void) { GstTaskPool *pool; pool = g_object_new (TEST_TYPE_RT_POOL, NULL); return pool; } ``` 
上述最关键的是default_push函数。它需要启动一个新线程并运行从形参func传入的指定函数。现实中更适当做法可能是在线程池中多准备一些线程,避免频繁的线程创建和销毁所带来的不必要开销。 在下一段代码中,应用程序开始真正为fakesrc配置上述生成的定制化GstTaskPool。如前文所述,我们需要一个同步的bus响应函数来处理STREAM_STATUS消息,从而在获得GST_STREAM_STATUS_TYPE_CREATE类型消息时,为GstTask配置定制化GstTaskPool。 ```c static GMainLoop* loop; static void on_stream_status (GstBus *bus, GstMessage *message, gpointer user_data) { GstStreamStatusType type; GstElement *owner; const GValue *val; GstTask *task = NULL; gst_message_parse_stream_status (message, &type, &owner); val = gst_message_get_stream_status_object (message); /* see if we know how to deal with this object */ if (G_VALUE_TYPE (val) == GST_TYPE_TASK) { task = g_value_get_object (val); } switch (type) { case GST_STREAM_STATUS_TYPE_CREATE: if (task) { GstTaskPool *pool; pool = test_rt_pool_new(); gst_task_set_pool (task, pool); } break; default: break; } } static void on_error (GstBus *bus, GstMessage *message, gpointer user_data) { g_message ("received ERROR"); g_main_loop_quit (loop); } static void on_eos (GstBus *bus, GstMessage *message, gpointer user_data) { g_main_loop_quit (loop); } int main (int argc, char *argv[]) { GstElement *bin, *fakesrc, *fakesink; GstBus *bus; GstStateChangeReturn ret; gst_init (&argc, &argv); /* create a new bin to hold the elements */ bin = gst_pipeline_new ("pipeline"); g_assert (bin); /* create a source */ fakesrc = gst_element_factory_make ("fakesrc", "fakesrc"); g_assert (fakesrc); g_object_set (fakesrc, "num-buffers", 50, NULL); /* and a sink */ fakesink = gst_element_factory_make ("fakesink", "fakesink"); g_assert (fakesink); /* add objects to the main pipeline */ gst_bin_add_many (GST_BIN (bin), fakesrc, fakesink, NULL); /* link the elements */ gst_element_link (fakesrc, fakesink); loop = g_main_loop_new (NULL, FALSE); /* get the bus, we need to install a sync handler */ bus = gst_pipeline_get_bus (GST_PIPELINE (bin)); gst_bus_enable_sync_message_emission (bus); gst_bus_add_signal_watch (bus); g_signal_connect (bus, "sync-message::stream-status", (GCallback) on_stream_status, NULL); g_signal_connect (bus, "message::error", (GCallback) on_error, NULL); g_signal_connect (bus, "message::eos", (GCallback) on_eos, NULL); /* start playing */ ret = gst_element_set_state (bin, GST_STATE_PLAYING); if (ret != GST_STATE_CHANGE_SUCCESS) { g_message ("failed to change state"); return -1; } /* Run event loop listening for bus messages until EOS or ERROR */ g_main_loop_run (loop); /* stop the bin */ gst_element_set_state (bin, GST_STATE_NULL); gst_object_unref (bus); g_main_loop_unref (loop); return 0; } ``` 注意,上述代码很可能需要应用程序获得root权限。当不能创建线程时,gst_element_set_state将会运行失败,失败的返回值会被应用程序所捕获。 当流水线中存在多个线程时,应用程序会同时收到多条STREAM_STATUS消息。可以通过消息所有者(所有者往往是启动这个消息所对应线程的pad或element)来区分这些消息,从而明确每个消息所对应线程运行的是整个应用程序上下文中的哪个函数。 ## When would you want to force a thread? 
我们在上文中已经看到element自身如何创建线程。除此以外,还可以通过向流水线中添加特定类型element的方式强制流水线使用独立线程完成部分任务。 从计算性能角度考虑,应用程序不应该为流水线中每一个element都强制使用独立线程,因为这会导致不必要的计算开销。反之,推荐在流水线中强制使用独立线程的场景有: - 数据缓存。当应用程序在处理互联网流式数据时或者从声卡、显卡里记录实时数据流时,数据缓存就十分必要。并且数据缓存可以降低流水线中某个环节突然卡顿导致数据丢失而产生的影响。具体示例可以参见《Stream buffering》一文,那里举例了queue2组件在缓存互联网数据时的作用。 ![Thread Buffering](images/thread-buffering.png) - 同步输出设备。当流水线中的数据同时包含视频和音频时,并且两者都作为流水线的独立输出。通过为它们各自添加独立线程,就能做到音视频相互独立解析,并且提供更好的音视频同步方式。 ![Thread Synchronizing](images/thread-synchronizing.png) 阅读至此,您可能已经发现在流水线中强制使用独立线程的特殊类型element就是“queue”。queue作为线程隔离组件提供流水线中强制使用新线程的能力。它的实现是基于众所周知的生产者/消费者模式,除了能提供跨线程安全的数据流通功能,它本身还可以用作缓存空间。queue包含若干个基于GObject实现的属性,可以调节这些属性实现特定业务。例如,可以实现数据流量的上下界控制。queue的下界属性默认情况下不启用,但当应用程序指定了数据流量下界时,若没有足够的数据量通过queue传递,则queue不会输出任何数据。上界属性则代表,如果传输中的数据量超过上界阈值,则queue会根据其他属性的配置,或阻挡更多数据的流入,或开启不同的数据丢弃功能。 在流水线中使用queue的方式也很简单,仅需在构建流水线时为必要的位置添加queue即可。其他关于线程的细节会由GStreamer在内部完成。 ================================================ FILE: basic_theory/basic_tutorial/dynamic_pipelines.md ================================================ # Basic tutorial 3: Dynamic pipelines ## 目标 这篇教程展示了使用GStreamer需要的剩余基本概念,允许你随着数据流动来构建pipeline, 而不是在应用程序的一开始就定义一个完整的管道。 学习完这篇教程,你讲具备开始Playback tutorial的必要知识: - 如何在连接elements的时候实现更好的控制。 - 如何获取感兴趣的事件通知以便及时做出处理。 - GStreamer States。 ## 介绍 可以看到这篇教程的pipeline在设置为PLAYING状态之前都没有完成构建,这种行为是允许的。但是假如在播放之前没有完成,那么数据在到达pipeline的某个节点将上报一个错误信息并停止运行。 在这个例子中我们将打开一个多路复用的文件,音频和视频被存储在同一个容器文件中。负责响应打开多路复用文件的element被叫做解复用器,可以处理MKV、QT、MOV、Ogg、WMV等格式的容器文件。 GStreamer elements相互通信的端口称为pad,存在sink pad(数据通过它进入元素)和src pad(数据通过它退出元素),根据定义很明显可以知道source element只有src pad,sink element只有sink pad,而filter element两者都有。 ![img](images/dynamic_pipelines/src-element.png) ![img](images/dynamic_pipelines/filter-element.png) ![img](images/dynamic_pipelines/sink-element.png) 解复用器含有一个sink pad,复用数据经过它进入element;含有多个source pads,复用数据中的每路数据各一个。 ![img](images/dynamic_pipelines/filter-element-multi.png) 下图是一个包含一个解复用器和两个分支的简单pipeline,一个分支处理音频数据,一个分支处理视频数据。 **注意此例图这不是本教程的pipeline。** ![img](images/dynamic_pipelines/simple-player.png) 处理解复用器的难点在于,直到收到一些数据以及有机会查看容器文件中的内容之前解复用器无法生成任何信息,即解复用器的source pad是动态生成的,在生成之前其他element无法与它连接,于是pipeline只能在这终止。 当解复用器收到足够多的信息,能够知道容器文件中媒体流的数量和类型时,它将开始创建source pad,这时我们就可以完成pipeline的构建。 **注:为了简单起见,本教程例子只连接音频pad而忽略视频pad。** ## Dynamic Hello World ### basic-tutorial-3.c ```c #include /* Structure to contain all our information, so we can pass it to callbacks */ typedef struct _CustomData { GstElement *pipeline; GstElement *source; GstElement *convert; GstElement *resample; GstElement *sink; } CustomData; /* Handler for the pad-added signal */ static void pad_added_handler (GstElement *src, GstPad *pad, CustomData *data); int main(int argc, char *argv[]) { CustomData data; GstBus *bus; GstMessage *msg; GstStateChangeReturn ret; gboolean terminate = FALSE; /* Initialize GStreamer */ gst_init (&argc, &argv); /* Create the elements */ data.source = gst_element_factory_make ("uridecodebin", "source"); data.convert = gst_element_factory_make ("audioconvert", "convert"); data.resample = gst_element_factory_make ("audioresample", "resample"); data.sink = gst_element_factory_make ("autoaudiosink", "sink"); /* Create the empty pipeline */ data.pipeline = gst_pipeline_new ("test-pipeline"); if (!data.pipeline || !data.source || !data.convert || !data.resample || !data.sink) { g_printerr ("Not all elements could be created.\n"); return -1; } /* Build the pipeline. Note that we are NOT linking the source at this * point. We will do it later. 
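   * (uridecodebin exposes its source pads only after enough of the stream
   * has been parsed, so the linking happens in the pad-added callback below.)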
*/ gst_bin_add_many (GST_BIN (data.pipeline), data.source, data.convert, data.resample, data.sink, NULL); if (!gst_element_link_many (data.convert, data.resample, data.sink, NULL)) { g_printerr ("Elements could not be linked.\n"); gst_object_unref (data.pipeline); return -1; } /* Set the URI to play */ g_object_set (data.source, "uri", "https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm", NULL); /* Connect to the pad-added signal */ g_signal_connect (data.source, "pad-added", G_CALLBACK (pad_added_handler), &data); /* Start playing */ ret = gst_element_set_state (data.pipeline, GST_STATE_PLAYING); if (ret == GST_STATE_CHANGE_FAILURE) { g_printerr ("Unable to set the pipeline to the playing state.\n"); gst_object_unref (data.pipeline); return -1; } /* Listen to the bus */ bus = gst_element_get_bus (data.pipeline); do { msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_STATE_CHANGED | GST_MESSAGE_ERROR | GST_MESSAGE_EOS); /* Parse message */ if (msg != NULL) { GError *err; gchar *debug_info; switch (GST_MESSAGE_TYPE (msg)) { case GST_MESSAGE_ERROR: gst_message_parse_error (msg, &err, &debug_info); g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", debug_info ? debug_info : "none"); g_clear_error (&err); g_free (debug_info); terminate = TRUE; break; case GST_MESSAGE_EOS: g_print ("End-Of-Stream reached.\n"); terminate = TRUE; break; case GST_MESSAGE_STATE_CHANGED: /* We are only interested in state-changed messages from the pipeline */ if (GST_MESSAGE_SRC (msg) == GST_OBJECT (data.pipeline)) { GstState old_state, new_state, pending_state; gst_message_parse_state_changed (msg, &old_state, &new_state, &pending_state); g_print ("Pipeline state changed from %s to %s:\n", gst_element_state_get_name (old_state), gst_element_state_get_name (new_state)); } break; default: /* We should not reach here */ g_printerr ("Unexpected message received.\n"); break; } gst_message_unref (msg); } } while (!terminate); /* Free resources */ gst_object_unref (bus); gst_element_set_state (data.pipeline, GST_STATE_NULL); gst_object_unref (data.pipeline); return 0; } /* This function will be called by the pad-added signal */ static void pad_added_handler (GstElement *src, GstPad *new_pad, CustomData *data) { GstPad *sink_pad = gst_element_get_static_pad (data->convert, "sink"); GstPadLinkReturn ret; GstCaps *new_pad_caps = NULL; GstStructure *new_pad_struct = NULL; const gchar *new_pad_type = NULL; g_print ("Received new pad '%s' from '%s':\n", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src)); /* If our converter is already linked, we have nothing to do here */ if (gst_pad_is_linked (sink_pad)) { g_print ("We are already linked. Ignoring.\n"); goto exit; } /* Check the new pad's type */ new_pad_caps = gst_pad_get_current_caps (new_pad); new_pad_struct = gst_caps_get_structure (new_pad_caps, 0); new_pad_type = gst_structure_get_name (new_pad_struct); if (!g_str_has_prefix (new_pad_type, "audio/x-raw")) { g_print ("It has type '%s' which is not raw audio. 
Ignoring.\n", new_pad_type); goto exit; } /* Attempt the link */ ret = gst_pad_link (new_pad, sink_pad); if (GST_PAD_LINK_FAILED (ret)) { g_print ("Type is '%s' but link failed.\n", new_pad_type); } else { g_print ("Link succeeded (type '%s').\n", new_pad_type); } exit: /* Unreference the new pad's caps, if we got them */ if (new_pad_caps != NULL) gst_caps_unref (new_pad_caps); /* Unreference the sink pad */ gst_object_unref (sink_pad); } ``` ## 工作流 ### CustomData ```c /* Structure to contain all our information, so we can pass it to callbacks */ typedef struct _CustomData { GstElement *pipeline; GstElement *source; GstElement *convert; GstElement *sink; } CustomData; ``` 在此前的教程中,我们以局部变量的形式维护所有我们需要的信息(GstElement指针),但是由于本教程(和大部分应用程序)需要使用回调函数,为了便于处理我们将所有数据组织成一个结构体。 ### Build Pipeline ```c /* Create the elements */ data.source = gst_element_factory_make ("uridecodebin", "source"); data.convert = gst_element_factory_make ("audioconvert", "convert"); data.resample = gst_element_factory_make ("audioresample", "resample"); data.sink = gst_element_factory_make ("autoaudiosink", "sink"); ``` GstElements的创建和连接如前文一样,在本教程中我们创建了一个`uridecodebin`,和[Basic tutorial 1: Hello world!]中的playbin一样,它也是Playback中的bin插件,它在内部实例化所有需要的elements(source, demuxers和decoders)以将URI解码成裸音频流和/或裸音频流。但和playbin不一样的是它并不包含解码之后的播放处理,因此和解复用器一样,它的source pad在初始化阶段是不可用的需要用户手动完成连接。 `audioconvert`是一个非常有用的插件,它能转换不同的音频格式,确保教程中的例子能够在任何平台上运行(各个平台上的音频解码器解码出的格式不一定符合audio sink的要求)。 `audioresample`是一个非常有用的插件,它能够转换不同的音频采样率,同样是为了确保教程中的例子能够在任何平台上运行(aduio sink不一定支持各个平台上的音频解码器解码出的音频采样率)。 `autoaudiosink`和前文中用到的`autovideosink`一样,它将把音频流渲染到声卡上。 ```c if (!gst_element_link_many (data.convert, data.resample, data.sink, NULL)) { g_printerr ("Elements could not be linked.\n"); gst_object_unref (data.pipeline); return -1; } ``` 这里我们将`audioconvert`,`audioresample`和`autoaudiosink`连接起来,但是并没有将它们和source连接,因为这这个阶段,`uridecodebin`还没有生成source pad,这部分工作将放在之后完成。 ```c /* Set the URI to play */ g_object_set (data.source, "uri", "https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm", NULL); ``` 设置`uridecodebin`的`uri`属性。 ### Signals ```c /* Connect to the pad-added signal */ g_signal_connect (data.source, "pad-added", G_CALLBACK (pad_added_handler), &data); ``` GSignals是GStreamer的一个重点,它们将在某些事件发生的时候以回调的方式通知你。这些信号以名字属性区分,每个GObject都有它自己的信号。 上述代码中我们使用`gst_signal_connect()`监听`uridecodebin`的`pad-added`信号,并为这个信号提供一个回调函数指针`pad_added_handler`并向回调传递一个用户数据指针`data`。GStreamer不会对用户数据指针进行处理,它直接将它传递给回调函数,因此回调函数可以使用主线程的数据。在本教程中,我们传递了一个`CustomData`结构体变量。 一个GstElement能够出发的signals可以在它的文档看到或者使用`gst-inspect-1.0`查看。 至此pipeline构建完成,我们可以设置PLAYING状态并开始播放以及监听bus中你感兴趣的messages。 ### 回调函数 当source element最终获取到足够的信息从而开始生成数据的时候,它将创建source pads并且出发`pad-added`信号,这时将调用信号连接的回调函数 : ```c static void pad_added_handler (GstElement *src, GstPad *new_pad, CustomData *data) { ``` - `src`是出发信号的element,在本教程中是`uridecodebin`,信号处理程序的第一个参数总是触发它的对象。 - `new_pad`是`src` element刚刚添加的`GstPad`,这正是我们想要连接的pad。 - `data`是我们连接信号时传递的指针,在本教程中是一个`CustomData`指针。 ```c GstPad *sink_pad = gst_element_get_static_pad (data->convert, "sink"); ``` 我们从`CustomData`中获取`autoaudioconvert`element,并使用`gst_element_get_static_pad()`获取它的`sink pad`,要与`new_pad`连接的pad。在之前教程中我们连接elements,由GStreamer自行选择正确的pads,现在我们手动完成这部分连接。 ```c /* If our converter is already linked, we have nothing to do here */ if (gst_pad_is_linked (sink_pad)) { g_print ("We are already linked. 
Ignoring.\n"); goto exit; } ``` `uridecodebin`将生成尽可能多的pads,这取决于它能够出处理的数据类型,并且没生成一个pad,回调都会被调用一次,上述代码将避免我们尝试将`new_pad`与一个已经连接了的element连接。 ```c /* Check the new pad's type */ new_pad_caps = gst_pad_get_current_caps (new_pad, NULL); new_pad_struct = gst_caps_get_structure (new_pad_caps, 0); new_pad_type = gst_structure_get_name (new_pad_struct); if (!g_str_has_prefix (new_pad_type, "audio/x-raw")) { g_print ("It has type '%s' which is not raw audio. Ignoring.\n", new_pad_type); goto exit; } ``` 现在让我们来检查这个`new_pad`将输出的数据类型,因为本教程只关注音频数据。在这个pad生成之前我们已经创建并连接了一系列处理音频数据所需的elements,现在要做的就是把`new_pad`与这些elements连接起来。 `gst_pad_get_current_caps()`检查当前pad的`capabilities`(当前输出的数据类型),它被包裹在一个GstCaps结构体中。用户可以使用`gst_pad_query_caps()`获取当前pad支持的所有caps。一个pad可以提供许多capabilities,因此GstCaps可以包含许多GstStructure,每一个GstStructure代表一个不同的`capability`。但一个pad当前的caps上只有一个GstStructure代表了单一的媒体格式,或者pad当前没有caps将返回一个`NULL`。 在教程中,我只关注音频数据,在这个pad中我们想要的只有音频`capability`,所以我们使用gst_caps_get_structure()检索pad的第一个GstStructure。 最后我们使用`gst_structure_get_name()`来恢复结构体的name成员,它包含媒体数据格式的主要描述。 假如GstStructure的name不是`audio/x-raw`那么意味着这个pad不是音频解码的数据的pad,目前我们不需要关注非音频数据,所以直接跳过。 ```c /* Attempt the link */ ret = gst_pad_link (new_pad, sink_pad); if (GST_PAD_LINK_FAILED (ret)) { g_print ("Type is '%s' but link failed.\n", new_pad_type); } else { g_print ("Link succeeded (type '%s').\n", new_pad_type); } ``` `gst_pad_link()`尝试连接两个pads,连接顺序必须是source->sink,并且两个pads必须属于同一个bin/pipeline中的elements。 随着pad连接完成,uridecodebin将与其余elements连接起来,程序将得以正确运行直到遇到`ERROR`或是`EOS`。 ## GStreamer States 在前两篇教程中我们都使用`gst_element_set_state()`来修改设置pipeline的状态为`PLAYING`以让pipeline运行,这里介绍其余的GStreamer States: | State | Description | | :-------- | :----------------------------------------------------------- | | `NULL` | the NULL state or initial state of an element. | | `READY` | the element is ready to go to PAUSED. | | `PAUSED` | the element is PAUSED, it is ready to accept and process data. Sink elements however only accept one buffer and then block(Preroll). | | `PLAYING` | the element is PLAYING, the clock is running and the data is flowing. 
| Pipeline只能在上表中相邻的两个状态之间变换,即无法从`NULL`直接改为`PLAYING`,必须经过`READY`和`PAUSED`两个中间态。但是假如你将pipeline设置为`PLAYING`, GStreamer将自动进行中间转换。 ```c case GST_MESSAGE_STATE_CHANGED: /* We are only interested in state-changed messages from the pipeline */ if (GST_MESSAGE_SRC (msg) == GST_OBJECT (data.pipeline)) { GstState old_state, new_state, pending_state; gst_message_parse_state_changed (msg, &old_state, &new_state, &pending_state); g_print ("Pipeline state changed from %s to %s:\n", gst_element_state_get_name (old_state), gst_element_state_get_name (new_state)); } break; ``` 本教程添加了这段代码,用于监听有关状态更改的总线消息并将它们打印到屏幕上,以帮助理解状态转换。每个element都将关于其当前状态的消息放在总线上**(设置GST_DEBUG环境变量可以查看所有GStreamer elements的输出LOG)**,因此我们将它们过滤掉,只监听来自pipeline的消息。 大多数应用程序只需要关心`PLAYING`状态能否正常播放,`PAUSED`状态能否正常暂停以及`NULL`状态能否退出和回收资源 ## 练习 对大多数程序员而言,动态连接GstPad一直是一个困难的话题,为了证明你掌握了它的连接,请在本教程的基础上添加一个`autovideosink`(可能在它之前还需要一个`videoconvert`)并将其与解复用器连接起来,以便你的pipeline能够在处理音频数据的同时处理视频数据。 假如你正确实现了,你的程序运行起来应该和[Basic tutorial 1: Hello world!](https://ricardolu.gitbook.io/gstreamer/basic-theory/basic-tutorial1-hello-world)一样能够看到和听到电影。 **注:**这个练习的实现代码可以参考[Build Pipeline](https://ricardolu.gitbook.io/gstreamer/application-development/build-pipeline)教程,在这个教程的`gst_element_factory_make()`章节我构建了一条符合这个联系要求的pipeline并成功运行起来。 ## 总结 在这篇教程中,你学习了: - 如何使用GSignals通知事件 - 如何直接连接GstPads而不是连接它们的父elements - GStreamer element的状态 你结合以上内容构建了一条动态pipeline,这条pipeline不是在程序开始就被指定好的,而是在获取到所有媒体信息的情况下创建的。 这时你可以选择继续学习接下来的基础教程,或者是直接学习Playback教程以获得更多关于playbin element的信息。**个人的建议是跳过Basic tutorial 4/5(因为它们并不常用),可以先完成Basic tutorial 67/8的学习,再开始Playback教程的学习。** ================================================ FILE: basic_theory/basic_tutorial/gstreamer_concepts.md ================================================ # Basic tutorial 2: GStreamer concepts ## 目标 上一篇教程展示了如何自动地都剑一条pipeline。现在我们将手动构建一条pipeline:初始化每一个element并将它们连接起来。在本教程中,将学习: - 什么是GStreamer element以及如何创建它。 - 如何连击两个elements。 - 如何自定义一个element的行为(属性)。 - 如何舰艇bus的错误情形并且从GStreamer messages中提取信息。 ## Manual Hello World ### basic-tutorial-2.c ```c #include int main (int argc, char *argv[]) { GstElement *pipeline, *source, *sink; GstBus *bus; GstMessage *msg; GstStateChangeReturn ret; /* Initialize GStreamer */ gst_init (&argc, &argv); /* Create the elements */ source = gst_element_factory_make ("videotestsrc", "source"); sink = gst_element_factory_make ("autovideosink", "sink"); /* Create the empty pipeline */ pipeline = gst_pipeline_new ("test-pipeline"); if (!pipeline || !source || !sink) { g_printerr ("Not all elements could be created.\n"); return -1; } /* Build the pipeline */ gst_bin_add_many (GST_BIN (pipeline), source, sink, NULL); if (gst_element_link (source, sink) != TRUE) { g_printerr ("Elements could not be linked.\n"); gst_object_unref (pipeline); return -1; } /* Modify the source's properties */ g_object_set (source, "pattern", 0, NULL); /* Start playing */ ret = gst_element_set_state (pipeline, GST_STATE_PLAYING); if (ret == GST_STATE_CHANGE_FAILURE) { g_printerr ("Unable to set the pipeline to the playing state.\n"); gst_object_unref (pipeline); return -1; } /* Wait until error or EOS */ bus = gst_element_get_bus (pipeline); msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS); /* Parse message */ if (msg != NULL) { GError *err; gchar *debug_info; switch (GST_MESSAGE_TYPE (msg)) { case GST_MESSAGE_ERROR: gst_message_parse_error (msg, &err, &debug_info); g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", 
debug_info ? debug_info : "none"); g_clear_error (&err); g_free (debug_info); break; case GST_MESSAGE_EOS: g_print ("End-Of-Stream reached.\n"); break; default: /* We should not reach here because we only asked for ERRORs and EOS */ g_printerr ("Unexpected message received.\n"); break; } gst_message_unref (msg); } /* Free resources */ gst_object_unref (bus); gst_element_set_state (pipeline, GST_STATE_NULL); gst_object_unref (pipeline); return 0; } ``` 注:编译和运行在上一篇教程已经展示过了,后续不在赘述。 ## 工作流 element是GStreamer的基本构成单位,它们处理从source element(数据生产者)通过filter element流向sink element(数据消费者)的数据。 ![img](images/gstreamer_concepts/figure-1.png) ### 创建element ```c /* Create the elements */ source = gst_element_factory_make ("videotestsrc", "source"); sink = gst_element_factory_make ("autovideosink", "sink"); ``` 上述代码展示了如何使用`gst_element_factory_make()`新建一个element,函数的第一个参数是要创建的element名,即插件名。第二个参数是给这个特殊实例起的名称,在同一条pipeline中,必须是唯一的名称。这个名称可以用于后续检索它,假如你传递`NULL`作为实例名,GStreamer也会自动为它初始化一个唯一的名称。 ### 创建pipeline 在这篇教程中,我们创建了两个元素:`videotestsrc`和`autovideosink`,中间没有filter element,所以整个pipeline看起来就如下图: ![img](images/gstreamer_concepts/basic-concepts-pipeline.png) `videotestsrc`是一个按照制定`pattern`生成测试视频source element(它将生产数据),这个插件在测试和教程中很好用,但通常并不会出现在真实的应用中。 `autovideosink`是一个在窗口中播放它接收到的图像的sink element(它将消费数据),GStreamer包含有很多视频sink element,具体取决于操作系统,它们能够处理不同的图像格式,`autovideosink`将自动选择其中一个并实例话化,所以用户不需要担心平台兼容性问题。 ```c /* Create the empty pipeline */ pipeline = gst_pipeline_new ("test-pipeline"); /* Build the pipeline */ gst_bin_add_many (GST_BIN (pipeline), source, sink, NULL); if (gst_element_link (source, sink) != TRUE) { g_printerr ("Elements could not be linked.\n"); gst_object_unref (pipeline); return -1; } ``` `gst_pipeline_new()`可以创建一个pipeline,GStreamer中的所有元素在使用之前通常必须包含在一条pipeline 中,pipeline将负责一些时钟和消息功能。 一条pipeline也是一个特殊的bin,被用来包含其他elements,因此GstBin的所有方法也能用于GstPipeline。在上述代码中,我们调用`gst_bin_add_many()`来将elements添加到pipeline中(注意对pipeline的`GST_BIN()`映射),这个函数接受一个要被添加到pipeline中的element列表,因为不确定列表长度所以需要以`NULL`结尾。添加单个element可以使用`get_bin_add()`。 虽然elements已经被加入到一条pipeline中,但这时候这些elements仍然处于一个无须的状态,在运行之前,我们需要使用`gst_element_link()`将它们彼此之间**按顺序连接起来**。`gst_element_link()`的第一个参数必须是source elemnt,第二个参数必须是sink element,并且由于只能连接同一条pipeline中的elements,必须在连接之前将所有的elements加入pipeline。 ### 设置element properties GStreamer的实现依赖于GObject,所有的GStreamer elements都是特殊的GObject,用于提供`property`特性。 大多数GStreamer elements都具有被称作**属性**的定制性质,使用`g_obeject_set()`修改`writable propeties`的属性值可以改变element的行为,使用`g_obejct_get()`请求`readable properties`的属性值可以获得element的内部状态。 `g_object_set()`可以接受一个以`NULL`结束的属性名-属性值键值对列表,所以可以一次性设置多个属性。 ```c /* Modify the source's properties */ g_object_set (source, "pattern", 0, NULL); ``` 在本教程中我们将`videotestsrc`的`pattern`属性设置为`0`,`pattern`属性控制者`videotestsec`的输出视频类型,用户可以尝试设置为不同的值并查看效果。 **属性名和可能的属性值可以查看element的说明文档或者使用`gst-inspect-1.0`工具查看。** ### Error监听 ```c /* Start playing */ ret = gst_element_set_state (pipeline, GST_STATE_PLAYING); if (ret == GST_STATE_CHANGE_FAILURE) { g_printerr ("Unable to set the pipeline to the playing state.\n"); gst_object_unref (pipeline); return -1; } ``` 向上篇教程一样,这时已经成功构建和设置完pipeline了,调用`gst_element_set_state()`改变pipeline的状态,但是这次将检查状态改变的返回值,假如状态修改失败,将返回一个error值并进行相关退出处理。 ```c /* Wait until error or EOS */ bus = gst_element_get_bus (pipeline); msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS); /* Parse message */ if (msg != NULL) { GError *err; gchar *debug_info; switch (GST_MESSAGE_TYPE (msg)) { case GST_MESSAGE_ERROR: gst_message_parse_error (msg, &err, &debug_info); g_printerr 
("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", debug_info ? debug_info : "none"); g_clear_error (&err); g_free (debug_info); break; case GST_MESSAGE_EOS: g_print ("End-Of-Stream reached.\n"); break; default: /* We should not reach here because we only asked for ERRORs and EOS */ g_printerr ("Unexpected message received.\n"); break; } gst_message_unref (msg); } ``` 在上一篇教程中,我们没有对`gst_bus_timed_pop_filtered()`返回的GstMessage进行处理,在本篇教程中,我们监听pipeline的erroe和EOS信号,并在发生时进行了打印了相应的信息,便于debug。 GstMessage是一个非常通用的结构,它可以传递几乎任何类型的信息。同时,GStreamer为每种消息提供了一系列解析函数。 在本教程里面,我们一旦知道message里面包含一个错误(通过使用GST_MESSAGE_TYPE宏),我们可以使用`gst_message_parse_error()`方法来解析message,这个方法会返回一个GLib的[GError](https://developer.gnome.org/glib/unstable/glib-Error-Reporting.html#GError)结构。 ### GStreamer bus GStreamer bus它是负责将element生成的GstMessages按顺序交付给应用程序和**应用程序线程**(这点很重要,因此GStreamer实际是在其他的线程中处理媒体流)的对象。 messgaes可以使用`get_bus_timed_pop_filtered()`同步提取或者使用信号回调的方式异步提取。应用程序应该始终关注总线,以得到错误和其他回放相关问题的通知。 剩余代码和第一篇教程中的资源回收一样,不再赘述。 ## 练习 尝试在这条example pipeline的source和sink之间添加视频filter element,例如`vertigotv`,你需要创建它,将它加入pipeline,并将它和pipeline中的其他元素连接起来。 取决于你的开发平台和使用的插件,你可能会遇到一个“negotiation”错误,这是因为sink element无法理解视频filter element生产的数据格式,有关 negotiation的更多信息请阅读[Basic tutorial 6: Media formats and Pad Capabilities]()。在这种情况下你需要在filter element之后添加`videoconvert`,有关`videoconvert`的更多信息可以阅读[Basic tutorial 14: Handy elements]()。 **注:**关于手动创建GStreamer Pipeline推荐阅读[Build Pipeline](https://ricardolu.gitbook.io/gstreamer/application-development/build-pipeline#gst_element_factory_make)以获得更详细的介绍。 ## 总结 这篇教程展示了: - 如何使用`gst_element_factory_make()`创建element。 - 如何使用`gst_pipeline_new()`创建一个空的pipeline。 - 如何使用`gst_bin_add_many()`向pipeline中添加element。 - 如何使用`gst_element_link_many()`连接element。 总计有两篇教程介绍GStreamer的基本概念,这是第一篇,下一篇是第二篇。 原文地址:https://gstreamer.freedesktop.org/documentation/tutorials/basic/concepts.html?gi-language=c ================================================ FILE: basic_theory/basic_tutorial/hello_world.md ================================================ # Basic tutorial1: Hello world! 
## 目标 大部分人对大多数编程教程的第一印象都是在屏幕上输出”Hello World“,但是GStreamer作为一个多媒体框架,将以播放一个视频作为第一个教程。 ## Hello world ### basic-tutorial-1.c ```c #include int main (int argc, char *argv[]) { GstElement *pipeline; GstBus *bus; GstMessage *msg; /* Initialize GStreamer */ gst_init (&argc, &argv); /* Build the pipeline */ pipeline = gst_parse_launch ("playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm", NULL); /* Start playing */ gst_element_set_state (pipeline, GST_STATE_PLAYING); /* Wait until error or EOS */ bus = gst_element_get_bus (pipeline); msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS); /* Free resources */ if (msg != NULL) gst_message_unref (msg); gst_object_unref (bus); gst_element_set_state (pipeline, GST_STATE_NULL); gst_object_unref (pipeline); return 0; } ``` 上述代码将打开一个窗口并显示一个带有音频的电影。由于这段媒体是从网络上获取的,所以可能需要等待几秒才会显示窗口,等待时间取决于网络连接的速度。此外,没有延迟管理(缓冲),所以当网速比较慢的时候,电影可能会在几秒钟后停止。具体可以参考[Basic tutorial 12: Streaming](https://gstreamer.freedesktop.org/documentation/tutorials/basic/streaming.html)以获得解决方案。 ### 编译 GStreamer目前支持大部分主流的开发平台,包括x86/arm64架构的Linux/Mac OS X/Windows,但是考虑到编译运行的便利性,更建议以Linux作为学习平台,官方的[安装文档](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c)展示了如何在各个Linux发行版本安装GStreamer,并且在使用Gcc编译时添加以下选项以链接GStreamer库文件: ```shell pkg-config --cflags --libs gstreamer-1.0 # 完整的编译指令 gcc basic-tutorial-1.c -o basic-tutorial-1 `pkg-config --cflags --libs gstreamer-1.0` # 运行 ./basic-tutorial-1 ``` Linux平台下的大部分开发软件包使用PackageConfig进行的管理,使用命令行的方式无法适应大型项目的需求,通常情况下我会使用CMake进行项目的构建,在CMake中假如以下指令即可完成GStreamer相关库的链接工作: ```cmake cmake_minimum_required(VERSION 3.10) project(basic-tutorial-1) set(CMAKE_CXX_STANDARD 11) include(FindPkgConfig) # equals `pkg-config --cflags --libs gstreamer-1.0` pkg_check_modules(GST REQUIRED gstreamer-1.0) pkg_check_modules(GSTAPP REQUIRED gstreamer-app-1.0) include_directories( ${GST_INCLUDE_DIRS} ${GSTAPP_INCLUDE_DIRS} ) link_directories( ${GST_LIBRARY_DIRS} ${GSTAPP_LIBRARY_DIRS} ) add_executable(${PROJECT_NAME} basic-tutorial-1.c ) target_link_libraries(${PROJECT_NAME} ${GST_LIBRARIES} ${GSTAPP_LIBRARIES} ) ``` 在这里就不介绍CMake的过多细节,关于CMake的链接你可以浏览[CMake 查找已有库](https://app.gitbook.com/@ricardolu/s/trantor/cmake-in-action/cmake-tutorial/cha-zhao-yi-you-ku)获得更多相关信息。 ```shell cmake -H. 
## 工作流

### 初始化

```c
/* Initialize GStreamer */
gst_init (&argc, &argv);
```

GStreamer程序的第一行代码必须是`gst_init (&argc, &argv)`,它完成了以下工作:

- 初始化所有内部结构体
- 检查平台所有可用的插件
- 处理所有与GStreamer相关的命令行选项

假如在程序中没有这行代码,编译能够通过,但在运行时将抛出异常,不一定是找不到GStreamer的结构体,就我的经验而言会是无法加载你所用的插件。传递的参数可以是两个`NULL`,但更建议传递命令行参数。

### 构建Pipeline

```c
pipeline = gst_parse_launch ("playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm", NULL);
```

这行代码是这篇教程的核心,它构建了一条只含有`playbin`这一个element的pipeline。

GStreamer是一个为处理多媒体流而设计的框架:媒体从source element(生产者)流动到sink element(消费者),中间经过一系列执行各种任务的element,这一组element的集合就被称为“pipeline”。

通常你可以通过手动组装各个独立的element来构建pipeline,但假如pipeline足够简单,你也可以使用GStreamer的高级特性:`gst_parse_launch()`。这个函数能够将一条pipeline的文本描述解析并转化成一条真实的pipeline,使用起来十分便利。

GStreamer提供了两种构建pipeline的方式,可以浏览[Build Pipeline](https://ricardolu.gitbook.io/gstreamer/application-development/build-pipeline)以获得更多信息。

### Playbin

[playbin]()是一个特殊的element,它具备一条pipeline的所有特点:既有source,又有sink。它在内部自动创建和连接所有播放媒体需要的element,用户不需要在意这些细节。

它不具备手动搭建的pipeline所拥有的控制粒度,但仍然允许足够多的定制,以满足广泛的应用程序需求。本教程传递了一个http源,用户也可以尝试传递file源或是rtsp源,playbin将为不同的源显式地初始化合适的source element。

```c
/* Start playing */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
```

当一切准备就绪,需要将pipeline的状态设置为`PLAYING`以开始播放。

### 监听Pipeline

```c
/* Wait until error or EOS */
bus = gst_element_get_bus (pipeline);
msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
```

上述代码监听整条pipeline的总线:`gst_element_get_bus()`检索pipeline的bus,`gst_bus_timed_pop_filtered()`阻塞程序直到运行中引发ERROR或者到达EOS。

至此,GStreamer将接管整个程序。假如你的URI有错误、文件不存在,或者缺少plug-in,GStreamer提供了几种通知机制,但在这个教程中一旦发生错误程序就将退出运行。

### 资源回收

```c
/* Free resources */
if (msg != NULL)
  gst_message_unref (msg);
gst_object_unref (bus);
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
```

在程序终止之前,需要回收所有手动申请的资源,建议用户总是阅读所用函数的文档,以确认是否需要释放相应的资源。

`gst_bus_timed_pop_filtered()`返回了一个需要使用`gst_message_unref()`释放的GstMessage;`gst_element_get_bus()`会增加pipeline bus的引用计数,因此需要使用`gst_object_unref()`释放;在释放pipeline及其下的所有element之前,需要先将pipeline的状态设置为`NULL`。
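作为补充,下面的小段草图展示了在释放之前如何区分返回的消息类型(本教程未对消息内容做处理,下一篇教程会详细展开):

```c
/* 草图:区分 gst_bus_timed_pop_filtered() 返回的是 ERROR 还是 EOS */
if (msg != NULL) {
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR)
    g_printerr ("Playback stopped due to an error.\n");
  else if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_EOS)
    g_print ("End-Of-Stream reached.\n");
  gst_message_unref (msg);
}
```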
## 总结

本教程展示了以下内容:

- 如何使用CMake构建GStreamer程序。
- 如何使用`gst_init()`初始化GStreamer。
- 如何使用`gst_parse_launch()`快速构建一条pipeline。
- 如何使用playbin创建一条自动播放的pipeline。
- 如何使用`gst_element_set_state()`向GStreamer发送开始播放的信号。
- 如何使用`gst_element_get_bus()`和`gst_bus_timed_pop_filtered()`监听pipeline并进行相关处理。

原文地址:https://gstreamer.freedesktop.org/documentation/tutorials/basic/hello-world.html?gi-language=c#walkthrough

================================================
FILE: basic_theory/basic_tutorial/media_format.md
================================================

# Basic tutorial 6: Media formats and Pad Capabilities

## 目标

Pad的Capabilities是GStreamer的一个基本概念,由于大部分时间都由框架自动处理它们,所以用户很少感觉到它们的存在。这篇略微理论化的教程将展示:

- 什么是Pad Capabilities。
- 如何检索它们。
- 什么时候检索它们。
- 为什么用户需要了解它们。

## 介绍

### Pads

如之前介绍的一样,Pads允许信息进出element。Pad的Capabilities(简称为Caps)指定了Pad能够传递什么类型的信息,例如“320x200分辨率、30FPS的RGB视频”,或是“16位音频样本、5.1声道、采样率44100Hz”,或者是mp3和h264这类压缩格式。

Pads可以支持多种Capabilities(例如一个video sink可以支持不同格式的RGB/YUV视频),并且Capabilities的值可以是一个范围(例如一个audio sink能够支持从1Hz到48000Hz的采样率)。然而,真正在Pad之间传递的信息必须只有一种明确指定的类型。通过一个称为“协商(negotiation)”的过程,两个相连的Pad就一个公共类型达成一致,从而Pad的Capabilities固定下来(它们只有一种类型,不包含范围)。下面的例程将向你清楚地展现这个协商的过程。

两个elements支持的Capabilities必须有交集它们才能连接,否则它们无法理解彼此传递的数据,这就是Capabilities的主要设计目的。

作为一个应用程序开发者,你经常需要通过连接elements来构建pipeline,因此你需要了解你所使用的elements的Pad Caps,或者至少当GStreamer elements因为“协商”错误而连接失败时能够知道它们具体是什么。

### Pad templates

Pads从Pad Templates生成,Pad Templates指明了一个Pad所有可能的Capabilities。模板对于创建多个相似的Pad很有用,并且允许提前拒绝elements之间的连接:假如两个elements的Pad模板的Capabilities没有交集,就没有必要进行更深入的“协商”。

Pad模板可以视作“协商”过程的第一步,随着过程的推进,实际的Pads被实例化,Pads的Capabilities也不断被完善,直至固定下来(或者“协商”失败)。

### Capabilities examples

```shell
SINK template: 'sink'
  Availability: Always
  Capabilities:
    audio/x-raw
               format: S16LE
                 rate: [ 1, 2147483647 ]
             channels: [ 1, 2 ]
    audio/x-raw
               format: U8
                 rate: [ 1, 2147483647 ]
             channels: [ 1, 2 ]
```

这是一个element的永久`sink pad`(暂时不讨论`Availability`)。它支持两种媒体格式,都是原始音频数据`audio/x-raw`:16位小端序有符号数和8位无符号数。方括号表示一个范围,例如,声道数`channels`的范围是1到2。
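顺带一提,在命令行中可以用这类caps字符串人为固定两个element之间协商出的类型,例如(示例pipeline,假设系统中已安装gst-launch-1.0命令行工具):

```shell
# 强制 audiotestsrc 与 autoaudiosink 之间采用固定的 S16LE/44100Hz/单声道 Caps
gst-launch-1.0 audiotestsrc ! audio/x-raw,format=S16LE,rate=44100,channels=1 ! autoaudiosink
```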
```shell
SRC template: 'src'
  Availability: Always
  Capabilities:
    video/x-raw
                width: [ 1, 2147483647 ]
               height: [ 1, 2147483647 ]
            framerate: [ 0/1, 2147483647/1 ]
               format: { I420, NV12, NV21, YV12, YUY2, Y42B, Y444, YUV9, YVU9, Y41B, Y800, Y8, GREY, Y16, UYVY, YVYU, IYU1, v308, AYUV, A420 }
```

`video/x-raw`表示这个`source pad`输出原始格式的视频。它支持很宽的分辨率和帧率范围,以及一系列YUV格式(用花括号列出),这些格式的差别在于图像编码方式和子采样程度。

### 注解

用户可以使用`gst-inspect-1.0`工具查看所有GStreamer element的Caps信息。

注意有些elements需要查询底层硬件以获得支持的格式,并据此提供它们的Pad Caps(通常发生在element进入READY状态或更晚)。因此,同一个element在不同平台上支持的Caps有可能不同,甚至某两次运行之间就会有所不同(虽然这种情况很少见)。

这篇教程实例化了两个elements(通过GstElementFactory的方式),展示了它们的Pad模板,连接它们并将pipeline设置为播放状态。在每次状态变化时,程序都会展示sink element的Pad Capabilities,这样你就能观察到整个“协商”过程中Pad Caps固定之前的所有变化。

## A trivial Pad Capabilities Example

### basic-tutorial-6.c

```c
#include <gst/gst.h>

/* Functions below print the Capabilities in a human-friendly format */
static gboolean print_field (GQuark field, const GValue * value, gpointer pfx) {
  gchar *str = gst_value_serialize (value);

  g_print ("%s %15s: %s\n", (gchar *) pfx, g_quark_to_string (field), str);
  g_free (str);
  return TRUE;
}

static void print_caps (const GstCaps * caps, const gchar * pfx) {
  guint i;

  g_return_if_fail (caps != NULL);

  if (gst_caps_is_any (caps)) {
    g_print ("%sANY\n", pfx);
    return;
  }
  if (gst_caps_is_empty (caps)) {
    g_print ("%sEMPTY\n", pfx);
    return;
  }

  for (i = 0; i < gst_caps_get_size (caps); i++) {
    GstStructure *structure = gst_caps_get_structure (caps, i);

    g_print ("%s%s\n", pfx, gst_structure_get_name (structure));
    gst_structure_foreach (structure, print_field, (gpointer) pfx);
  }
}

/* Prints information about a Pad Template, including its Capabilities */
static void print_pad_templates_information (GstElementFactory * factory) {
  const GList *pads;
  GstStaticPadTemplate *padtemplate;

  g_print ("Pad Templates for %s:\n", gst_element_factory_get_longname (factory));
  if (!gst_element_factory_get_num_pad_templates (factory)) {
    g_print ("  none\n");
    return;
  }

  pads = gst_element_factory_get_static_pad_templates (factory);
  while (pads) {
    padtemplate = pads->data;
    pads = g_list_next (pads);

    if (padtemplate->direction == GST_PAD_SRC)
      g_print ("  SRC template: '%s'\n", padtemplate->name_template);
    else if (padtemplate->direction == GST_PAD_SINK)
      g_print ("  SINK template: '%s'\n", padtemplate->name_template);
    else
      g_print ("  UNKNOWN!!!
template: '%s'\n", padtemplate->name_template); if (padtemplate->presence == GST_PAD_ALWAYS) g_print (" Availability: Always\n"); else if (padtemplate->presence == GST_PAD_SOMETIMES) g_print (" Availability: Sometimes\n"); else if (padtemplate->presence == GST_PAD_REQUEST) g_print (" Availability: On request\n"); else g_print (" Availability: UNKNOWN!!!\n"); if (padtemplate->static_caps.string) { GstCaps *caps; g_print (" Capabilities:\n"); caps = gst_static_caps_get (&padtemplate->static_caps); print_caps (caps, " "); gst_caps_unref (caps); } g_print ("\n"); } } /* Shows the CURRENT capabilities of the requested pad in the given element */ static void print_pad_capabilities (GstElement *element, gchar *pad_name) { GstPad *pad = NULL; GstCaps *caps = NULL; /* Retrieve pad */ pad = gst_element_get_static_pad (element, pad_name); if (!pad) { g_printerr ("Could not retrieve pad '%s'\n", pad_name); return; } /* Retrieve negotiated caps (or acceptable caps if negotiation is not finished yet) */ caps = gst_pad_get_current_caps (pad); if (!caps) caps = gst_pad_query_caps (pad, NULL); /* Print and free */ g_print ("Caps for the %s pad:\n", pad_name); print_caps (caps, " "); gst_caps_unref (caps); gst_object_unref (pad); } int main(int argc, char *argv[]) { GstElement *pipeline, *source, *sink; GstElementFactory *source_factory, *sink_factory; GstBus *bus; GstMessage *msg; GstStateChangeReturn ret; gboolean terminate = FALSE; /* Initialize GStreamer */ gst_init (&argc, &argv); /* Create the element factories */ source_factory = gst_element_factory_find ("audiotestsrc"); sink_factory = gst_element_factory_find ("autoaudiosink"); if (!source_factory || !sink_factory) { g_printerr ("Not all element factories could be created.\n"); return -1; } /* Print information about the pad templates of these factories */ print_pad_templates_information (source_factory); print_pad_templates_information (sink_factory); /* Ask the factories to instantiate actual elements */ source = gst_element_factory_create (source_factory, "source"); sink = gst_element_factory_create (sink_factory, "sink"); /* Create the empty pipeline */ pipeline = gst_pipeline_new ("test-pipeline"); if (!pipeline || !source || !sink) { g_printerr ("Not all elements could be created.\n"); return -1; } /* Build the pipeline */ gst_bin_add_many (GST_BIN (pipeline), source, sink, NULL); if (gst_element_link (source, sink) != TRUE) { g_printerr ("Elements could not be linked.\n"); gst_object_unref (pipeline); return -1; } /* Print initial negotiated caps (in NULL state) */ g_print ("In NULL state:\n"); print_pad_capabilities (sink, "sink"); /* Start playing */ ret = gst_element_set_state (pipeline, GST_STATE_PLAYING); if (ret == GST_STATE_CHANGE_FAILURE) { g_printerr ("Unable to set the pipeline to the playing state (check the bus for error messages).\n"); } /* Wait until error, EOS or State Change */ bus = gst_element_get_bus (pipeline); do { msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS | GST_MESSAGE_STATE_CHANGED); /* Parse message */ if (msg != NULL) { GError *err; gchar *debug_info; switch (GST_MESSAGE_TYPE (msg)) { case GST_MESSAGE_ERROR: gst_message_parse_error (msg, &err, &debug_info); g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", debug_info ? 
debug_info : "none");
        g_clear_error (&err);
        g_free (debug_info);
        terminate = TRUE;
        break;
      case GST_MESSAGE_EOS:
        g_print ("End-Of-Stream reached.\n");
        terminate = TRUE;
        break;
      case GST_MESSAGE_STATE_CHANGED:
        /* We are only interested in state-changed messages from the pipeline */
        if (GST_MESSAGE_SRC (msg) == GST_OBJECT (pipeline)) {
          GstState old_state, new_state, pending_state;
          gst_message_parse_state_changed (msg, &old_state, &new_state, &pending_state);
          g_print ("\nPipeline state changed from %s to %s:\n",
              gst_element_state_get_name (old_state), gst_element_state_get_name (new_state));
          /* Print the current capabilities of the sink element */
          print_pad_capabilities (sink, "sink");
        }
        break;
      default:
        /* We should not reach here because we only asked for ERRORs, EOS and STATE_CHANGED */
        g_printerr ("Unexpected message received.\n");
        break;
      }
      gst_message_unref (msg);
    }
  } while (!terminate);

  /* Free resources */
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  gst_object_unref (source_factory);
  gst_object_unref (sink_factory);
  return 0;
}
```

## 工作流

`print_pad_capabilities`、`print_caps`和`print_pad_templates_information`以一种友好的格式打印Capabilities结构。假如你想了解GstCaps的内部结构,请阅读[GstCaps]()。

```c
/* Shows the CURRENT capabilities of the requested pad in the given element */
static void print_pad_capabilities (GstElement *element, gchar *pad_name) {
  GstPad *pad = NULL;
  GstCaps *caps = NULL;

  /* Retrieve pad */
  pad = gst_element_get_static_pad (element, pad_name);
  if (!pad) {
    g_printerr ("Could not retrieve pad '%s'\n", pad_name);
    return;
  }

  /* Retrieve negotiated caps (or acceptable caps if negotiation is not finished yet) */
  caps = gst_pad_get_current_caps (pad);
  if (!caps)
    caps = gst_pad_query_caps (pad, NULL);

  /* Print and free */
  g_print ("Caps for the %s pad:\n", pad_name);
  print_caps (caps, "      ");
  gst_caps_unref (caps);
  gst_object_unref (pad);
}
```

`gst_element_get_static_pad()`用于根据pad name检索给定element的pad,之所以称其为“静态”(static)的pad,是因为这个pad一直存在。关于Pad的更多内容请阅读[GstPad]()。

获取pad之后,我们调用`gst_pad_get_current_caps()`来获取这个pad当前的Capabilities:取决于“协商”过程的进展,它们可能已经固定,也可能还没有。Pad甚至可能还没有协商出Capabilities,在这种情况下,我们调用`gst_pad_query_caps()`来获取当前可接受的Pad Capabilities:在`NULL`状态下,可接受的Caps就是Pad Template的Caps;在之后的状态中它可能更加具体,因为这时可能已经查询过实际的硬件。

然后我们打印这些获取到的Capabilities信息。

```c
/* Create the element factories */
source_factory = gst_element_factory_find ("audiotestsrc");
sink_factory = gst_element_factory_find ("autoaudiosink");
if (!source_factory || !sink_factory) {
  g_printerr ("Not all element factories could be created.\n");
  return -1;
}

/* Print information about the pad templates of these factories */
print_pad_templates_information (source_factory);
print_pad_templates_information (sink_factory);

/* Ask the factories to instantiate actual elements */
source = gst_element_factory_create (source_factory, "source");
sink = gst_element_factory_create (sink_factory, "sink");
```

在之前的教程中我们使用`gst_element_factory_make()`来创建GStreamer element,并跳过了关于factory的讨论。可以明确的是,一个`GstElementFactory`负责实例化一种特定类型的GStreamer element,以factory name区分(可以理解为一个`GstElementFactory`对应一类element,同一个factory可以实例化多个element对象)。

`gst_element_factory_make()`实际上是`gst_element_factory_find()`加`gst_element_factory_create()`的简洁形式。

通过工厂(factory),Pad模板就已经可以访问了,所以factories一创建好,我们就立刻打印这些信息。
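上一段提到的两步工厂用法可以用下面的草图说明(仅演示API调用关系,省略了错误处理):

```c
/* 草图:gst_element_factory_make() 大致等价于 find + create 两步 */
GstElementFactory *factory = gst_element_factory_find ("audiotestsrc");
GstElement *element = gst_element_factory_create (factory, "source");
gst_object_unref (factory); /* factory 使用完毕后需要释放引用 */
```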
我们跳过pipeline的创建和启动部分,直接跳到状态切换消息的处理:

```c
case GST_MESSAGE_STATE_CHANGED:
  /* We are only interested in state-changed messages from the pipeline */
  if (GST_MESSAGE_SRC (msg) == GST_OBJECT (pipeline)) {
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed (msg, &old_state, &new_state, &pending_state);
    g_print ("\nPipeline state changed from %s to %s:\n",
        gst_element_state_get_name (old_state), gst_element_state_get_name (new_state));
    /* Print the current capabilities of the sink element */
    print_pad_capabilities (sink, "sink");
  }
  break;
```

上述代码将在每次pipeline状态变化时打印`autoaudiosink`的`sink pad`的Caps。在输出中你能看到最初的Caps(Pad Template的Caps)是如何被逐步完善的,直到它们完全固定(Caps只包含一个无范围的类型)。

## 总结

这篇教程展示了:

- 什么是`Pad Capabilities`和`Pad Template Capabilities`。
- 如何使用`gst_pad_get_current_caps()`和`gst_pad_query_caps()`检索它们。
- 它们在pipeline的不同状态下有不同的含义(在初始化时它们表示所有可能的Capabilities,在这之后表示当前Pad的特定Caps)。
- 事先知道elements支持的Caps类型对于elements的连接至关重要。
- 可以使用`gst-inspect-1.0`查看element支持的Pad Caps。

================================================
FILE: basic_theory/basic_tutorial/multithread.md
================================================

# Basic tutorial 7: Multithreading and Pad Availability

## 目标

GStreamer自动处理多线程,但是在某些情况下,用户可能需要手动解耦线程。这篇教程将展示如何解耦线程,并完善关于Pad Availability的描述。更准确来说,这篇文档解释了:

- 如何为pipeline的某些部分创建新的线程。
- 什么是Pad Availability。
- 如何复制流。

## 介绍

### Multithreading

GStreamer是一个多线程的框架,这意味着在内部,它会根据需要创建和销毁线程,例如,将流的处理从应用程序线程中解耦。此外,插件也可以自由创建线程来处理它们的任务,例如视频解码器可以创建四个线程以充分利用CPU的四个核。

除此以外,应用程序在创建pipeline的时候可以明确地指定它的一个分支(pipeline的一部分)运行在不同的线程上(例如同时进行音频和视频的解码)。

这使用`queue`插件完成:它的sink pad只负责将数据入队,而src pad在另一个线程中将数据出队并传递给后续插件。这个插件同样可以用来做缓冲,这点在后面讲述流媒体的教程中可以看到,`queue`内部队列的长度可以通过属性来设置。

### The example pipeline

![img](images/multithread/basic-tutorial-7.png)

程序的源是合成音频信号(连续的音调),它被`tee`分离(`tee`将从sink pad中接收到的所有数据通过各个src pad发送出去)。一个分支将信号传递给声卡,另一个分支将波形渲染成视频并发送给显示屏。

如上图所示,每个queue都创建了一个新的线程,所以整条pipeline运行在三个线程中。含有多个sink element的pipeline通常是多线程的,因为为了保持同步,多个sink会互相阻塞,直到所有的sink都准备好;假如只有单个线程,那么它们将被第一个sink阻塞住。

### Request pads

在[Basic tutorial 3: Dynamic pipelines](https://ricardolu.gitbook.io/gstreamer/basic-theory/basic-tutorial-3-dynamic-pipelines)中我们了解到,`uridecodebin`这个插件在最开始是没有src pad的,直到数据开始流动并且`uridecodebin`知道了媒体类型,src pad才会出现,这类pad被称为`Sometimes Pads`;而通常一直可用的pad被称作`Always Pads`。

还有一类pad是`Request Pad`,这类pad是按需创建的。最典型的例子就是`tee`,它只有sink pad而没有初始的src pad:src pad需要被申请,然后`tee`才会添加它们。通过这种方式,一个输入的流可以被复制任意次数。缺点是`Request Pad`和其他element的连接与`Sometimes Pads`一样,需要手动完成。

另外,在PLAYING或PAUSED状态下申请(或释放)pad需要额外注意(Pad阻塞,本教程没有讲到这点),在NULL或READY状态下申请pad才是安全的。

## Simple multithreaded example

### basic-tutorial-7.c

```c
#include <gst/gst.h>

int main(int argc, char *argv[]) {
  GstElement *pipeline, *audio_source, *tee, *audio_queue, *audio_convert, *audio_resample, *audio_sink;
  GstElement *video_queue, *visual, *video_convert, *video_sink;
  GstBus *bus;
  GstMessage *msg;
  GstPad *tee_audio_pad, *tee_video_pad;
  GstPad *queue_audio_pad, *queue_video_pad;

  /* Initialize GStreamer */
  gst_init (&argc, &argv);

  /* Create the elements */
  audio_source = gst_element_factory_make ("audiotestsrc", "audio_source");
  tee = gst_element_factory_make ("tee", "tee");
  audio_queue = gst_element_factory_make ("queue", "audio_queue");
  audio_convert = gst_element_factory_make ("audioconvert", "audio_convert");
  audio_resample = gst_element_factory_make ("audioresample", "audio_resample");
  audio_sink = gst_element_factory_make ("autoaudiosink", "audio_sink");
  video_queue = gst_element_factory_make ("queue", "video_queue");
  visual = gst_element_factory_make ("wavescope", "visual");
  video_convert = gst_element_factory_make ("videoconvert", "csp");
  video_sink = gst_element_factory_make ("autovideosink", "video_sink");

  /* Create the empty pipeline */
  pipeline = gst_pipeline_new ("test-pipeline");

  if (!pipeline || !audio_source || !tee || !audio_queue || !audio_convert || !audio_resample || !audio_sink ||
!video_queue || !visual || !video_convert || !video_sink) { g_printerr ("Not all elements could be created.\n"); return -1; } /* Configure elements */ g_object_set (audio_source, "freq", 215.0f, NULL); g_object_set (visual, "shader", 0, "style", 1, NULL); /* Link all elements that can be automatically linked because they have "Always" pads */ gst_bin_add_many (GST_BIN (pipeline), audio_source, tee, audio_queue, audio_convert, audio_resample, audio_sink, video_queue, visual, video_convert, video_sink, NULL); if (gst_element_link_many (audio_source, tee, NULL) != TRUE || gst_element_link_many (audio_queue, audio_convert, audio_resample, audio_sink, NULL) != TRUE || gst_element_link_many (video_queue, visual, video_convert, video_sink, NULL) != TRUE) { g_printerr ("Elements could not be linked.\n"); gst_object_unref (pipeline); return -1; } /* Manually link the Tee, which has "Request" pads */ tee_audio_pad = gst_element_request_pad_simple (tee, "src_%u"); g_print ("Obtained request pad %s for audio branch.\n", gst_pad_get_name (tee_audio_pad)); queue_audio_pad = gst_element_get_static_pad (audio_queue, "sink"); tee_video_pad = gst_element_request_pad_simple (tee, "src_%u"); g_print ("Obtained request pad %s for video branch.\n", gst_pad_get_name (tee_video_pad)); queue_video_pad = gst_element_get_static_pad (video_queue, "sink"); if (gst_pad_link (tee_audio_pad, queue_audio_pad) != GST_PAD_LINK_OK || gst_pad_link (tee_video_pad, queue_video_pad) != GST_PAD_LINK_OK) { g_printerr ("Tee could not be linked.\n"); gst_object_unref (pipeline); return -1; } gst_object_unref (queue_audio_pad); gst_object_unref (queue_video_pad); /* Start playing the pipeline */ gst_element_set_state (pipeline, GST_STATE_PLAYING); /* Wait until error or EOS */ bus = gst_element_get_bus (pipeline); msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS); /* Release the request pads from the Tee, and unref them */ gst_element_release_request_pad (tee, tee_audio_pad); gst_element_release_request_pad (tee, tee_video_pad); gst_object_unref (tee_audio_pad); gst_object_unref (tee_video_pad); /* Free resources */ if (msg != NULL) gst_message_unref (msg); gst_object_unref (bus); gst_element_set_state (pipeline, GST_STATE_NULL); gst_object_unref (pipeline); return 0; } ``` ## 工作流 ```c /* Create the elements */ audio_source = gst_element_factory_make ("audiotestsrc", "audio_source"); tee = gst_element_factory_make ("tee", "tee"); audio_queue = gst_element_factory_make ("queue", "audio_queue"); audio_convert = gst_element_factory_make ("audioconvert", "audio_convert"); audio_resample = gst_element_factory_make ("audioresample", "audio_resample"); audio_sink = gst_element_factory_make ("autoaudiosink", "audio_sink"); video_queue = gst_element_factory_make ("queue", "video_queue"); visual = gst_element_factory_make ("wavescope", "visual"); video_convert = gst_element_factory_make ("videoconvert", "video_convert"); video_sink = gst_element_factory_make ("autovideosink", "video_sink"); ``` 上述pipeline示例图中的所有elements都在这完成实例化。 `audiotestsrc`生成连续的音调。`wavescope`消费一个音频信号并且将它渲染成音波(可以将它看作一个简易的示波器)。`autoaudiosink`和`autovideosink`在前文介绍过了。 转换element(`audioconvert`,`audioresample`和`videoconvert`)也是必须的,它们可以保证pipeline可以正确地连接。事实上,音频和视频的sink的Caps是由硬件确定的,所以你在设计时是不知道`audiotestsrc`和`wavescope`是否可以匹配上。如果Caps能够匹配,这些element的行为就类似于直通——对信号不做任何修改,这对于性能的影响基本可以忽略不计。 ```c /* Configure elements */ g_object_set (audio_source, "freq", 215.0f, NULL); g_object_set (visual, "shader", 0, "style", 1, NULL); ``` 
为了更好的演示效果,这里做了一点小小的调整:audiotestsrc的“freq”属性设置成215Hz,wavescope则设置了“shader”和“style”属性,让波形连续。用gst-inspect-1.0可以更好地了解这几个element的属性。

```c
/* Link all elements that can be automatically linked because they have "Always" pads */
gst_bin_add_many (GST_BIN (pipeline), audio_source, tee, audio_queue, audio_convert, audio_resample, audio_sink,
    video_queue, visual, video_convert, video_sink, NULL);
if (gst_element_link_many (audio_source, tee, NULL) != TRUE ||
    gst_element_link_many (audio_queue, audio_convert, audio_resample, audio_sink, NULL) != TRUE ||
    gst_element_link_many (video_queue, visual, video_convert, video_sink, NULL) != TRUE) {
  g_printerr ("Elements could not be linked.\n");
  gst_object_unref (pipeline);
  return -1;
}
```

这段代码把所有的element加入pipeline,并把可以自动连接的element都连接了起来(即具有`Always Pads`的element)。

注:事实上可以直接使用`gst_element_link_many()`连接具有`Request Pads`的element,它会在内部申请Pads,所以用户不需要关心被连接的element具有的是`Always Pads`还是`Request Pads`。但这样做并不省心:最终总是要释放申请到的Pad,而使用`gst_element_link_many()`时很容易忽略这点。因此建议的做法是始终手动申请`Request Pads`,避免麻烦。

```c
/* Manually link the Tee, which has "Request" pads */
tee_audio_pad = gst_element_request_pad_simple (tee, "src_%u");
g_print ("Obtained request pad %s for audio branch.\n", gst_pad_get_name (tee_audio_pad));
queue_audio_pad = gst_element_get_static_pad (audio_queue, "sink");
tee_video_pad = gst_element_request_pad_simple (tee, "src_%u");
g_print ("Obtained request pad %s for video branch.\n", gst_pad_get_name (tee_video_pad));
queue_video_pad = gst_element_get_static_pad (video_queue, "sink");
if (gst_pad_link (tee_audio_pad, queue_audio_pad) != GST_PAD_LINK_OK ||
    gst_pad_link (tee_video_pad, queue_video_pad) != GST_PAD_LINK_OK) {
  g_printerr ("Tee could not be linked.\n");
  gst_object_unref (pipeline);
  return -1;
}
gst_object_unref (queue_audio_pad);
gst_object_unref (queue_video_pad);
```

为了连接`Request Pad`,需要主动向element申请一个pad。一个element可能可以创建多种不同的`Request Pad`,所以在申请时必须提供想要的Pad模板名。Pad模板可以用`gst_element_class_get_pad_template()`方法获得,并以名字区分。在tee element的文档里我们可以看到两个pad模板,分别名为`sink`(`sink pad`)和`src_%u`(`Request Pad`)。我们使用`gst_element_request_pad_simple()`方法向`tee`请求两个Pad,分别给音频分支和视频分支。

然后我们去获取与`Request Pad`连接的下游element(`queue`)的`sink pad`,这些通常都是`Always Pad`,所以我们用`gst_element_get_static_pad()`方法去获取。

最后,我们用`gst_pad_link()`方法把pad连接起来,`gst_element_link()`和`gst_element_link_many()`内部也是调用这个函数来完成连接的。
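上文提到的`gst_element_class_get_pad_template()`的用法大致如下面的草图所示(沿用本例中的`tee`,仅演示API调用):

```c
/* 草图:从 tee 的 element class 中取出名为 "src_%u" 的 pad 模板 */
GstPadTemplate *templ =
    gst_element_class_get_pad_template (GST_ELEMENT_GET_CLASS (tee), "src_%u");
if (templ != NULL)
  g_print ("Got pad template: %s\n", GST_PAD_TEMPLATE_NAME_TEMPLATE (templ));
```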
我们从`queue`获取的`sink pad`在用完后需要通过`gst_object_unref()`释放;`tee`的`Request Pad`则要等到我们不再需要它们的时候,也就是程序的最后才释放。

就像往常一样,我们把pipeline设置到`PLAYING`状态,等待错误消息或者`EOS`消息到达。剩下的事情就是释放申请来的Pads:

```c
/* Release the request pads from the Tee, and unref them */
gst_element_release_request_pad (tee, tee_audio_pad);
gst_element_release_request_pad (tee, tee_video_pad);
gst_object_unref (tee_audio_pad);
gst_object_unref (tee_video_pad);
```

`gst_element_release_request_pad()`从`tee`中释放pad,但仍需要调用`gst_object_unref()`减少pad的引用计数,才算真正释放。

## 总结

这篇教程展示了:

- 如何使用`queue`在不同线程上运行pipeline的一部分。
- 什么是`Request Pad`,以及如何使用`gst_element_request_pad_simple()`、`gst_pad_link()`和`gst_element_release_request_pad()`连接具有`Request Pads`的elements。
- 如何使用`tee`复制stream。

下一篇教程将在本教程pipeline的基础上展示如何向一条正在运行的pipeline中注入和提取数据。

================================================
FILE: basic_theory/basic_tutorial/short_cutting_pipeline.md
================================================

# Basic tutorial 8: Short-cutting the pipeline

## 目标

GStreamer构建的pipeline不需要完全封闭,有几种方式允许用户在任意时间向pipeline注入或提取数据。本教程将展示:

- 如何将外部数据注入通用的GStreamer pipeline。
- 如何从通用的GStreamer pipeline中提取数据。
- 如何访问和操作从GStreamer pipeline中取出的数据。

[Playback tutorial 3: Short-cutting the pipeline]()中使用基于playbin的pipeline以另外一种方式实现了相同的目标。

## 介绍

应用程序可以通过几种方式与流经GStreamer pipeline的数据交互。本教程展示其中最简单的一种,因为它使用了专为这一目的设计的element:`appsink`和`appsrc`。

### Buffer

数据以称为buffer(缓冲区)的块的形式流经GStreamer pipeline。由于本例既要生产数据又要消费数据,我们有必要了解GstBuffer。

Source pads生产buffer,sink pads消费buffer,GStreamer接收这些buffer并将它们从一个element传递到另一个element。

一个buffer仅代表一个数据单元,用户不应该假设:

- 所有的buffer拥有相同的大小;
- 一个buffer进入一个element,就会有一个buffer从这个element出来。

Elements可以随意处理它们接收到的buffer。

GstBuffer可能包含不止一个实际的内存缓冲区:实际的内存缓冲区使用GstMemory对象抽象,一个GstBuffer可以包含多个GstMemory对象。

每个buffer都附有时间戳和持续时间,描述了buffer内容应该被解码、渲染或播放的时刻。事实上时间戳是一个非常复杂而微妙的主题,但目前这个简单的解释已经足够了。

举例来说,`filesrc`(一个读取文件的GStreamer element)生产的buffer具有`ANY`类型的caps并且没有时间戳信息。而经过解复用(详见[Basic tutorial 3: Dynamic pipelines](https://ricardolu.gitbook.io/gstreamer/basic-theory/basic-tutorial-3-dynamic-pipelines))之后,buffer将拥有一些具体的caps,例如`video/x-h264`。在经过解码之后,每一个buffer都将含有一帧具有原始caps的视频帧,例如`video/x-raw-yuv`,以及非常精确的时间戳,标记了这一帧应该被播放的时间。

## 教程

本教程是[Basic tutorial 7: Multithreading and Pad Availability]()的拓展,主要包含两个改动:

- `audiotestsrc`被`appsrc`取代,音频数据将由应用程序通过`appsrc`产生。
- `tee`增加了一个新分支,流入audio sink和波形显示的数据同时也被复制一份传给`appsink`。

`appsink`把信息回传到应用程序:在本教程中仅仅是通知用户收到了新的数据,但显然`appsink`可以承担更复杂的任务。

![img](images/short_cutting_pipeline/basic-tutorial-8.png)

## A crude waveform generator

### basic-tutorial-8.c

```c
#include <gst/gst.h>
#include <gst/audio/audio.h>
#include <string.h>

#define CHUNK_SIZE 1024   /* Amount of bytes we are sending in each buffer */
#define SAMPLE_RATE 44100 /* Samples per second we are sending */

/* Structure to contain all our information, so we can pass it to callbacks */
typedef struct _CustomData {
  GstElement *pipeline, *app_source, *tee, *audio_queue, *audio_convert1, *audio_resample, *audio_sink;
  GstElement *video_queue, *audio_convert2, *visual, *video_convert, *video_sink;
  GstElement *app_queue, *app_sink;

  guint64 num_samples;   /* Number of samples generated so far (for timestamp generation) */
  gfloat a, b, c, d;     /* For waveform generation */

  guint sourceid;        /* To control the GSource */

  GMainLoop *main_loop;  /* GLib's Main Loop */
} CustomData;

/* This method is called by the idle GSource in the mainloop, to feed CHUNK_SIZE bytes into appsrc.
 * The idle handler is added to the mainloop when appsrc requests us to start sending data (need-data signal)
 * and is removed when appsrc has enough data (enough-data signal). */
static gboolean push_data (CustomData *data) {
  GstBuffer *buffer;
  GstFlowReturn ret;
  int i;
  GstMapInfo map;
  gint16 *raw;
  gint num_samples = CHUNK_SIZE / 2; /* Because each sample is 16 bits */
  gfloat freq;

  /* Create a new empty buffer */
  buffer = gst_buffer_new_and_alloc (CHUNK_SIZE);

  /* Set its timestamp and duration */
  GST_BUFFER_TIMESTAMP (buffer) = gst_util_uint64_scale (data->num_samples, GST_SECOND, SAMPLE_RATE);
  GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale (num_samples, GST_SECOND, SAMPLE_RATE);

  /* Generate some psychodelic waveforms */
  gst_buffer_map (buffer, &map, GST_MAP_WRITE);
  raw = (gint16 *)map.data;
  data->c += data->d;
  data->d -= data->c / 1000;
  freq = 1100 + 1000 * data->d;
  for (i = 0; i < num_samples; i++) {
    data->a += data->b;
    data->b -= data->a / freq;
    raw[i] = (gint16)(500 * data->a);
  }
  gst_buffer_unmap (buffer, &map);
  data->num_samples += num_samples;

  /* Push the buffer into the appsrc */
  g_signal_emit_by_name (data->app_source, "push-buffer", buffer, &ret);

  /* Free the buffer now that we are done with it */
  gst_buffer_unref (buffer);

  if (ret != GST_FLOW_OK) {
    /* We got some error, stop sending data */
    return FALSE;
  }

  return TRUE;
}

/* This signal callback triggers when appsrc needs data.
Here, we add an idle handler * to the mainloop to start pushing data into the appsrc */ static void start_feed (GstElement *source, guint size, CustomData *data) { if (data->sourceid == 0) { g_print ("Start feeding\n"); data->sourceid = g_idle_add ((GSourceFunc) push_data, data); } } /* This callback triggers when appsrc has enough data and we can stop sending. * We remove the idle handler from the mainloop */ static void stop_feed (GstElement *source, CustomData *data) { if (data->sourceid != 0) { g_print ("Stop feeding\n"); g_source_remove (data->sourceid); data->sourceid = 0; } } /* The appsink has received a buffer */ static GstFlowReturn new_sample (GstElement *sink, CustomData *data) { GstSample *sample; /* Retrieve the buffer */ g_signal_emit_by_name (sink, "pull-sample", &sample); if (sample) { /* The only thing we do in this example is print a * to indicate a received buffer */ g_print ("*"); gst_sample_unref (sample); return GST_FLOW_OK; } return GST_FLOW_ERROR; } /* This function is called when an error message is posted on the bus */ static void error_cb (GstBus *bus, GstMessage *msg, CustomData *data) { GError *err; gchar *debug_info; /* Print error details on the screen */ gst_message_parse_error (msg, &err, &debug_info); g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", debug_info ? debug_info : "none"); g_clear_error (&err); g_free (debug_info); g_main_loop_quit (data->main_loop); } int main(int argc, char *argv[]) { CustomData data; GstPad *tee_audio_pad, *tee_video_pad, *tee_app_pad; GstPad *queue_audio_pad, *queue_video_pad, *queue_app_pad; GstAudioInfo info; GstCaps *audio_caps; GstBus *bus; /* Initialize custom data structure */ memset (&data, 0, sizeof (data)); data.b = 1; /* For waveform generation */ data.d = 1; /* Initialize GStreamer */ gst_init (&argc, &argv); /* Create the elements */ data.app_source = gst_element_factory_make ("appsrc", "audio_source"); data.tee = gst_element_factory_make ("tee", "tee"); data.audio_queue = gst_element_factory_make ("queue", "audio_queue"); data.audio_convert1 = gst_element_factory_make ("audioconvert", "audio_convert1"); data.audio_resample = gst_element_factory_make ("audioresample", "audio_resample"); data.audio_sink = gst_element_factory_make ("autoaudiosink", "audio_sink"); data.video_queue = gst_element_factory_make ("queue", "video_queue"); data.audio_convert2 = gst_element_factory_make ("audioconvert", "audio_convert2"); data.visual = gst_element_factory_make ("wavescope", "visual"); data.video_convert = gst_element_factory_make ("videoconvert", "video_convert"); data.video_sink = gst_element_factory_make ("autovideosink", "video_sink"); data.app_queue = gst_element_factory_make ("queue", "app_queue"); data.app_sink = gst_element_factory_make ("appsink", "app_sink"); /* Create the empty pipeline */ data.pipeline = gst_pipeline_new ("test-pipeline"); if (!data.pipeline || !data.app_source || !data.tee || !data.audio_queue || !data.audio_convert1 || !data.audio_resample || !data.audio_sink || !data.video_queue || !data.audio_convert2 || !data.visual || !data.video_convert || !data.video_sink || !data.app_queue || !data.app_sink) { g_printerr ("Not all elements could be created.\n"); return -1; } /* Configure wavescope */ g_object_set (data.visual, "shader", 0, "style", 0, NULL); /* Configure appsrc */ gst_audio_info_set_format (&info, GST_AUDIO_FORMAT_S16, SAMPLE_RATE, 1, NULL); audio_caps = gst_audio_info_to_caps (&info); g_object_set 
(data.app_source, "caps", audio_caps, "format", GST_FORMAT_TIME, NULL); g_signal_connect (data.app_source, "need-data", G_CALLBACK (start_feed), &data); g_signal_connect (data.app_source, "enough-data", G_CALLBACK (stop_feed), &data); /* Configure appsink */ g_object_set (data.app_sink, "emit-signals", TRUE, "caps", audio_caps, NULL); g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data); gst_caps_unref (audio_caps); /* Link all elements that can be automatically linked because they have "Always" pads */ gst_bin_add_many (GST_BIN (data.pipeline), data.app_source, data.tee, data.audio_queue, data.audio_convert1, data.audio_resample, data.audio_sink, data.video_queue, data.audio_convert2, data.visual, data.video_convert, data.video_sink, data.app_queue, data.app_sink, NULL); if (gst_element_link_many (data.app_source, data.tee, NULL) != TRUE || gst_element_link_many (data.audio_queue, data.audio_convert1, data.audio_resample, data.audio_sink, NULL) != TRUE || gst_element_link_many (data.video_queue, data.audio_convert2, data.visual, data.video_convert, data.video_sink, NULL) != TRUE || gst_element_link_many (data.app_queue, data.app_sink, NULL) != TRUE) { g_printerr ("Elements could not be linked.\n"); gst_object_unref (data.pipeline); return -1; } /* Manually link the Tee, which has "Request" pads */ tee_audio_pad = gst_element_request_pad_simple (data.tee, "src_%u"); g_print ("Obtained request pad %s for audio branch.\n", gst_pad_get_name (tee_audio_pad)); queue_audio_pad = gst_element_get_static_pad (data.audio_queue, "sink"); tee_video_pad = gst_element_request_pad_simple (data.tee, "src_%u"); g_print ("Obtained request pad %s for video branch.\n", gst_pad_get_name (tee_video_pad)); queue_video_pad = gst_element_get_static_pad (data.video_queue, "sink"); tee_app_pad = gst_element_request_pad_simple (data.tee, "src_%u"); g_print ("Obtained request pad %s for app branch.\n", gst_pad_get_name (tee_app_pad)); queue_app_pad = gst_element_get_static_pad (data.app_queue, "sink"); if (gst_pad_link (tee_audio_pad, queue_audio_pad) != GST_PAD_LINK_OK || gst_pad_link (tee_video_pad, queue_video_pad) != GST_PAD_LINK_OK || gst_pad_link (tee_app_pad, queue_app_pad) != GST_PAD_LINK_OK) { g_printerr ("Tee could not be linked\n"); gst_object_unref (data.pipeline); return -1; } gst_object_unref (queue_audio_pad); gst_object_unref (queue_video_pad); gst_object_unref (queue_app_pad); /* Instruct the bus to emit signals for each received message, and connect to the interesting signals */ bus = gst_element_get_bus (data.pipeline); gst_bus_add_signal_watch (bus); g_signal_connect (G_OBJECT (bus), "message::error", (GCallback)error_cb, &data); gst_object_unref (bus); /* Start playing the pipeline */ gst_element_set_state (data.pipeline, GST_STATE_PLAYING); /* Create a GLib Main Loop and set it to run */ data.main_loop = g_main_loop_new (NULL, FALSE); g_main_loop_run (data.main_loop); /* Release the request pads from the Tee, and unref them */ gst_element_release_request_pad (data.tee, tee_audio_pad); gst_element_release_request_pad (data.tee, tee_video_pad); gst_element_release_request_pad (data.tee, tee_app_pad); gst_object_unref (tee_audio_pad); gst_object_unref (tee_video_pad); gst_object_unref (tee_app_pad); /* Free resources */ gst_element_set_state (data.pipeline, GST_STATE_NULL); gst_object_unref (data.pipeline); return 0; } ``` ## 工作流 例程的131-205行创建了一条[Basic tutorial 7: Multithreading and Pad Availability]()中pipeline的拓展版本,包括实例化所有elements,自动连接所有具有`Always 
Pads`的elements,手动连接从`tee`中申请的`Request Pads`。

### appsrc

```c
/* Configure appsrc */
gst_audio_info_set_format (&info, GST_AUDIO_FORMAT_S16, SAMPLE_RATE, 1, NULL);
audio_caps = gst_audio_info_to_caps (&info);
g_object_set (data.app_source, "caps", audio_caps, "format", GST_FORMAT_TIME, NULL);
g_signal_connect (data.app_source, "need-data", G_CALLBACK (start_feed), &data);
g_signal_connect (data.app_source, "enough-data", G_CALLBACK (stop_feed), &data);
```

`appsrc`首先要设置的属性就是它的caps,它指定了`appsrc`将生成的数据类型,以便GStreamer检查它是否能够和下游elements连接(即下游elements能否处理这种数据)。caps属性值必须是GstCaps对象,GstCaps对象可以使用`gst_caps_from_string()`从一个字符串解析构建。
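比如,本例中通过`gst_audio_info_to_caps()`得到的caps,也可以像下面的草图一样直接从字符串构建(示意写法,假设运行在小端序平台上,S16对应S16LE;实际使用中还可补充layout等字段):

```c
/* 草图:从字符串构建与本例近似等价的 audio caps */
GstCaps *caps = gst_caps_from_string ("audio/x-raw,format=S16LE,rate=44100,channels=1");
g_object_set (data.app_source, "caps", caps, NULL);
gst_caps_unref (caps);
```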
我们连接了`appsrc`的`need-data`和`enough-data`信号,它们分别在`appsrc`内部队列数据不足或将满时被触发。本教程使用这两个信号来启动/停止信号发生过程。

### appsink

```c
/* Configure appsink */
g_object_set (data.app_sink, "emit-signals", TRUE, "caps", audio_caps, NULL);
g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data);
gst_caps_unref (audio_caps);
```

我们连接了`appsink`的`new-sample`信号,每当`appsink`收到数据的时候就会触发这个信号。与`appsrc`不同,`appsink`的`emit-signals`属性的默认值为`false`,因此我们需要将它设置为`true`,`appsink`才会正常发出`new-sample`信号。

启动pipeline、等待消息和最后的清理资源都和以前没什么区别。下面主要讲解注册的回调函数:

### need-data

```c
/* This signal callback triggers when appsrc needs data. Here, we add an idle handler
 * to the mainloop to start pushing data into the appsrc */
static void start_feed (GstElement *source, guint size, CustomData *data) {
  if (data->sourceid == 0) {
    g_print ("Start feeding\n");
    data->sourceid = g_idle_add ((GSourceFunc) push_data, data);
  }
}
```

当`appsrc`的内部队列缺乏数据的时候就会触发上述回调。这个回调函数中唯一做的事就是使用`g_idle_add()`注册一个GLib空闲函数,空闲函数将不断向`appsrc`传递数据,直到它的内部队列满。GLib空闲函数是主循环处于“空闲”状态(即当前没有更高优先级的任务需要执行)时会被调用的函数。使用GLib空闲函数需要用户先初始化并启动一个`GMainLoop`(推荐阅读[GMainLoop]()以获得更多相关信息)。

这只是`appsrc`支持的多种数据供给方式中的一种。buffer并不一定要使用GLib从主线程传递给`appsrc`,也不一定需要借助`need-data`和`enough-data`信号来与`appsrc`同步(虽然据说这是最方便的方式)。

**注:**如前文所说,流由GStreamer的单独线程处理。在实际的应用程序开发中,appsrc的数据来源往往是其他线程,数据的消耗由应用程序自行管理;通常数据被消耗的速度足够快,因此并不会特别处理appsrc的enough-data信号。

我们保存了`g_idle_add()`返回的`sourceid`,稍后需要用它来注销这个空闲函数。

### enough-data

```c
/* This callback triggers when appsrc has enough data and we can stop sending.
 * We remove the idle handler from the mainloop */
static void stop_feed (GstElement *source, CustomData *data) {
  if (data->sourceid != 0) {
    g_print ("Stop feeding\n");
    g_source_remove (data->sourceid);
    data->sourceid = 0;
  }
}
```

这个函数在appsrc内部的队列将满的时候被调用,此时我们需要停止发送数据。这里我们简单地用`g_source_remove()`把idle函数注销。

### push-buffer

```c
/* This method is called by the idle GSource in the mainloop, to feed CHUNK_SIZE bytes into appsrc.
 * The idle handler is added to the mainloop when appsrc requests us to start sending data (need-data signal)
 * and is removed when appsrc has enough data (enough-data signal). */
static gboolean push_data (CustomData *data) {
  GstBuffer *buffer;
  GstFlowReturn ret;
  int i;
  GstMapInfo map;
  gint16 *raw;
  gint num_samples = CHUNK_SIZE / 2; /* Because each sample is 16 bits */
  gfloat freq;

  /* Create a new empty buffer */
  buffer = gst_buffer_new_and_alloc (CHUNK_SIZE);

  /* Set its timestamp and duration */
  GST_BUFFER_TIMESTAMP (buffer) = gst_util_uint64_scale (data->num_samples, GST_SECOND, SAMPLE_RATE);
  GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale (num_samples, GST_SECOND, SAMPLE_RATE);

  /* Generate some psychodelic waveforms */
  gst_buffer_map (buffer, &map, GST_MAP_WRITE);
  raw = (gint16 *)map.data;
```

我们使用上述函数向`appsrc`传递数据。GLib将以自己的频率和速度调用它(调用时机不受用户控制),但我们连接了`enough-data`信号,以确保`appsrc`队列将满的时候能够停掉它。

它的第一个任务是使用`gst_buffer_new_and_alloc()`申请一个给定大小的GstBuffer(在这个例子中是1024字节)。

我们在`CustomData.num_samples`中累计已生成的采样数,这样就可以配合GstBuffer提供的`GST_BUFFER_TIMESTAMP`宏来生成buffer的时间戳。

`gst_util_uint64_scale()`是一个实用函数,用于对数值进行缩放(乘除),并保证计算不会溢出。

申请到的buffer内存可以使用`gst_buffer_map()`映射后,通过`GstMapInfo.data`指针访问(如上面的代码所示)。访问过程中要注意申请内存的大小,以免操作越界;访问结束后用`gst_buffer_unmap()`解除映射。

**注:**在GStreamer 0.10中访问buffer数据使用的是`GST_BUFFER_DATA`宏,1.0中已改为`gst_buffer_map()`/`gst_buffer_unmap()`的方式。

这里跳过波形的生成部分,因为这不是本教程要讲述的内容。

```c
/* Push the buffer into the appsrc */
g_signal_emit_by_name (data->app_source, "push-buffer", buffer, &ret);

/* Free the buffer now that we are done with it */
gst_buffer_unref (buffer);
```

一旦buffer准备好,我们就使用`push-buffer`动作信号将这个buffer传给`appsrc`,然后调用`gst_buffer_unref()`释放我们持有的引用,因为我们不会再用到它了。

### new-sample

```c
/* The appsink has received a buffer */
static GstFlowReturn new_sample (GstElement *sink, CustomData *data) {
  GstSample *sample;

  /* Retrieve the buffer */
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample) {
    /* The only thing we do in this example is print a * to indicate a received buffer */
    g_print ("*");
    gst_sample_unref (sample);
    return GST_FLOW_OK;
  }

  return GST_FLOW_ERROR;
}
```

最后,这个函数将在`appsink`接收到buffer的时候被调用。我们使用`pull-sample`动作信号来获取sample,然后向屏幕输出一个`*`以说明appsink成功接收到数据。注意`appsink`接收到的buffer不一定和`push_data`函数中生成的buffer完全一致:这条分支路径上的任何element都可能修改经过它的buffer(但本例并非如此,本例中buffer在去往`appsink`的路上仅经过一个`tee`,而`tee`并未改动buffer的内容)。
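如果想在回调中进一步访问数据本身,可以参考下面的草图(假设沿用本例的`new_sample`回调,在`gst_sample_unref()`之前执行):

```c
/* 草图:从 sample 中取出 buffer 并以只读方式映射,读取其中的数据 */
GstBuffer *buffer = gst_sample_get_buffer (sample);
GstMapInfo map;
if (buffer && gst_buffer_map (buffer, &map, GST_MAP_READ)) {
  g_print ("Received %" G_GSIZE_FORMAT " bytes\n", map.size);
  /* map.data 指向实际的音频数据,此处可以做进一步处理 */
  gst_buffer_unmap (buffer, &map);
}
```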
用`gst_sample_unref()`释放获取到的sample之后,本教程到此为止。

## 总结

这篇教程展示了应用程序:

- 如何使用`appsrc`元素向pipeline注入数据。
- 如何使用`appsink`元素从pipeline中提取数据。
- 如何通过GstBuffer操作从pipeline中取出的数据。

[Playback tutorial 3: Short-cutting the pipeline]()中使用基于playbin的pipeline以另外一种方式实现了相同的目标。

**注:**关于应用程序与GStreamer pipeline的数据交互,可以阅读[GStreamer App](https://ricardolu.gitbook.io/gstreamer/application-development/app)以获得更多实用信息。

原文地址:https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html#

================================================
FILE: basic_theory/playback/hardware_decode.md
================================================

# Playback tutorial 8: Hardware-accelerated video decoding

### Goal

随着低功耗设备变得越来越普遍,硬件加速视频解码已经迅速变成一种必需的特性。这篇教程(实际上更像一篇讲义)阐述了硬件加速的一些背景,以及GStreamer如何从中受益。

如果系统配置正确,你不需要做任何特殊工作来激活硬件加速;GStreamer将自动启用它。

### Introduction

对于CPU而言,视频解码是一个十分耗费资源的任务,尤其是针对1080p及以上的高分辨率视频。幸运的是,现代的图形卡都配备了可编程GPU,能够接手这部分工作,让CPU专注于其他任务。对于低功耗的CPU来说,专用的解码硬件是必要的,因为这些CPU根本无法足够快地解码这些媒体。

在目前的情况下(2016年6月),各个GPU厂商都提供了不同的方法(API)来访问它们的硬件,一个强大的行业标准还没有出现。

注:接下来的部分为各大芯片厂商的硬件编解码协议的简要介绍,这里不做翻译,因为对于有这部分需求的用户而言这些内容其实没有太大的价值。

### Inner workings of hardware-accelerated video decoding plugins

这些API通常提供许多功能,例如视频解码、预处理或解码帧的显示。相应的,这些插件(厂商维护的硬件相关插件)通常也为这些功能提供了不同的GStreamer element,因此pipeline的构建方式不会受到这些因素影响。

例如,`gstreamer-vaapi`系列插件提供了`vaapidecode`、`vaapipostproc`和`vaapisink`元素,这几个元素能够通过VAAPI分别完成硬件加速解码、将裸的视频帧上传到GPU内存、将GPU帧下载到系统内存以及显示GPU帧。

区分传统的GStreamer帧(驻留在系统内存中)和硬件加速API生成的帧是很重要的。后者驻留在GPU内存中,GStreamer无法直接访问这部分内存。GPU帧通常可以映射回系统内存,这时候可以将其视为传统的GStreamer帧,但将它们留在GPU中并直接从那里显示出来要高效得多。

GStreamer需要跟踪这些“硬件缓冲区”的位置,确保它们仍然能像传统缓冲区一样从一个element传递到另一个element。这些硬件缓冲区使用起来与常规缓冲区无异,只是映射它们的内容要慢得多,因为内容必须从硬件加速元素使用的特殊内存中检索出来。

以上意味着,假如当前系统支持特定的硬件加速API,并且相应的GStreamer插件也可用,类似`playbin`这种拥有auto-plugging机制的element就能使用硬件加速来构建pipeline;应用程序几乎不需要做任何特殊的事情来启用它。

当`playbin`必须在多个可用的element中进行选择时,例如一个传统的软件解码器(如`vp8dec`)和一个硬件加速解码器(如`vaapidecode`),它通过这两个解码器element在GStreamer中注册的rank来决定到底使用哪一个。rank是每个element的一个属性,表示其优先级;`playbin`将选择符合构建完整pipeline需求且拥有最高rank的element。

因此,`playbin`是否使用硬件加速取决于所有能处理当前media type的elements的rank。于是,使能或禁用硬件加速最简单的方法就是修改相关element的rank,如下面的代码所示(`gst_registry_get_default()`是GStreamer 0.10的API,在1.0中对应`gst_registry_get()`,这里按1.0修正):

```c++
static void enable_factory (const gchar *name, gboolean enable) {
    GstRegistry *registry = NULL;
    GstElementFactory *factory = NULL;

    registry = gst_registry_get ();
    if (!registry) return;

    factory = gst_element_factory_find (name);
    if (!factory) return;

    if (enable) {
        gst_plugin_feature_set_rank (GST_PLUGIN_FEATURE (factory), GST_RANK_PRIMARY + 1);
    } else {
        gst_plugin_feature_set_rank (GST_PLUGIN_FEATURE (factory), GST_RANK_NONE);
    }

    gst_registry_add_feature (registry, GST_PLUGIN_FEATURE (factory));
    return;
}
```

`enable_factory`函数的第一个参数是想要修改的element的name属性值,例如`vaapidecode`或`fluvadec`。

核心方法是`gst_plugin_feature_set_rank()`,它将把指定element factory的rank修改为期望值。为了方便起见,rank被分为NONE、MARGINAL、SECONDARY和PRIMARY四个等级,但任意数值均可。当启用一个元素时,我们将其rank设置为PRIMARY+1,因此它的rank高于其他通常具有PRIMARY rank的元素;将一个元素的rank设置为NONE,auto-plugging机制将永远不会选择它。

注:当硬件解码器有缺陷时,GStreamer开发人员经常将其rank设置得低于软件解码器。这一点值得警惕。

## Conclusion

这篇教程展示了GStreamer内部如何管理硬件加速视频解码。特别地:

- 如果有合适的API和相应的GStreamer插件可用,应用程序不需要做任何特殊的事情来启用硬件加速。
- 可以通过`gst_plugin_feature_set_rank()`修改解码element的rank,从而影响auto-plugging机制对它的选择。

================================================
FILE: basic_theory/playback/playbin.md
================================================

# Playback tutorial 1: Playbin usage

## Goal

使用`playbin`,我们不需要做太多工作就可以很方便地构建一条完整的播放pipeline。这篇教程将展示当`playbin`的默认行为不符合我们的特定需求时,如何进一步定制它。

在这篇教程中我们将学习:

- 如何找出一个文件中包含多少个流,以及如何在这些流之间切换。
- 如何收集每个流的信息。

## Introduction

往往,多条音频、视频和字幕流能够被嵌入在一个单独的文件中。最常见的情况是电影,它含有一条视频流和一条音频流(立体声或5.1声道都被视作单条流)。为了适配不同的语言,含有一条视频流和多条音频流的电影也越来越常见。这种情况下,用户选择一条音频流,应用程序将播放它而忽略其他的音频流。

为了能够选择适当的流,用户需要知道这些流的确切信息,例如它们的语言。这些信息以“metadata”(元数据)的形式内嵌在流中,这篇教程将展示如何检索它。

字幕也可以与音频和视频一起嵌入到文件中,关于字幕的处理细节将在[Playback tutorial 2: Subtitle management]()中讨论。最后,在单个文件中也可能存在多条视频流,例如记录同一场景多个角度的DVD,但这种情况比较罕见。

注:将多个流嵌入到一个单独的文件中被称为“multiplexing”或“muxing”(通常翻译为“复用”),这类文件被称为“容器”。常见的容器格式包括.mkv、.qt、.mov、.mp4、.ogg和.webm。从容器文件中分离出各条流的行为被称为“demultiplexing”或“demuxing”(通常翻译为“解复用”)。

## The multilingual player

```c++
// playback-tutorial-1.c
#include <gst/gst.h>

/* Structure to contain all our information, so we can pass it around */
typedef struct _CustomData {
  GstElement *playbin;  /* Our one and only element */
gint n_video; /* Number of embedded video streams */ gint n_audio; /* Number of embedded audio streams */ gint n_text; /* Number of embedded subtitle streams */ gint current_video; /* Currently playing video stream */ gint current_audio; /* Currently playing audio stream */ gint current_text; /* Currently playing subtitle stream */ GMainLoop *main_loop; /* GLib's Main Loop */ } CustomData; /* playbin flags */ typedef enum { GST_PLAY_FLAG_VIDEO = (1 << 0), /* We want video output */ GST_PLAY_FLAG_AUDIO = (1 << 1), /* We want audio output */ GST_PLAY_FLAG_TEXT = (1 << 2) /* We want subtitle output */ } GstPlayFlags; /* Forward definition for the message and keyboard processing functions */ static gboolean handle_message (GstBus *bus, GstMessage *msg, CustomData *data); static gboolean handle_keyboard (GIOChannel *source, GIOCondition cond, CustomData *data); int main(int argc, char *argv[]) { CustomData data; GstBus *bus; GstStateChangeReturn ret; gint flags; GIOChannel *io_stdin; /* Initialize GStreamer */ gst_init (&argc, &argv); /* Create the elements */ data.playbin = gst_element_factory_make ("playbin", "playbin"); if (!data.playbin) { g_printerr ("Not all elements could be created.\n"); return -1; } /* Set the URI to play */ g_object_set (data.playbin, "uri", "https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_cropped_multilingual.webm", NULL); /* Set flags to show Audio and Video but ignore Subtitles */ g_object_get (data.playbin, "flags", &flags, NULL); flags |= GST_PLAY_FLAG_VIDEO | GST_PLAY_FLAG_AUDIO; flags &= ~GST_PLAY_FLAG_TEXT; g_object_set (data.playbin, "flags", flags, NULL); /* Set connection speed. This will affect some internal decisions of playbin */ g_object_set (data.playbin, "connection-speed", 56, NULL); /* Add a bus watch, so we get notified when a message arrives */ bus = gst_element_get_bus (data.playbin); gst_bus_add_watch (bus, (GstBusFunc)handle_message, &data); /* Add a keyboard watch so we get notified of keystrokes */ #ifdef G_OS_WIN32 io_stdin = g_io_channel_win32_new_fd (fileno (stdin)); #else io_stdin = g_io_channel_unix_new (fileno (stdin)); #endif g_io_add_watch (io_stdin, G_IO_IN, (GIOFunc)handle_keyboard, &data); /* Start playing */ ret = gst_element_set_state (data.playbin, GST_STATE_PLAYING); if (ret == GST_STATE_CHANGE_FAILURE) { g_printerr ("Unable to set the pipeline to the playing state.\n"); gst_object_unref (data.playbin); return -1; } /* Create a GLib Main Loop and set it to run */ data.main_loop = g_main_loop_new (NULL, FALSE); g_main_loop_run (data.main_loop); /* Free resources */ g_main_loop_unref (data.main_loop); g_io_channel_unref (io_stdin); gst_object_unref (bus); gst_element_set_state (data.playbin, GST_STATE_NULL); gst_object_unref (data.playbin); return 0; } /* Extract some metadata from the streams and print it on the screen */ static void analyze_streams (CustomData *data) { gint i; GstTagList *tags; gchar *str; guint rate; /* Read some properties */ g_object_get (data->playbin, "n-video", &data->n_video, NULL); g_object_get (data->playbin, "n-audio", &data->n_audio, NULL); g_object_get (data->playbin, "n-text", &data->n_text, NULL); g_print ("%d video stream(s), %d audio stream(s), %d text stream(s)\n", data->n_video, data->n_audio, data->n_text); g_print ("\n"); for (i = 0; i < data->n_video; i++) { tags = NULL; /* Retrieve the stream's video tags */ g_signal_emit_by_name (data->playbin, "get-video-tags", i, &tags); if (tags) { g_print ("video stream %d:\n", i); gst_tag_list_get_string (tags, 
GST_TAG_VIDEO_CODEC, &str); g_print (" codec: %s\n", str ? str : "unknown"); g_free (str); gst_tag_list_free (tags); } } g_print ("\n"); for (i = 0; i < data->n_audio; i++) { tags = NULL; /* Retrieve the stream's audio tags */ g_signal_emit_by_name (data->playbin, "get-audio-tags", i, &tags); if (tags) { g_print ("audio stream %d:\n", i); if (gst_tag_list_get_string (tags, GST_TAG_AUDIO_CODEC, &str)) { g_print (" codec: %s\n", str); g_free (str); } if (gst_tag_list_get_string (tags, GST_TAG_LANGUAGE_CODE, &str)) { g_print (" language: %s\n", str); g_free (str); } if (gst_tag_list_get_uint (tags, GST_TAG_BITRATE, &rate)) { g_print (" bitrate: %d\n", rate); } gst_tag_list_free (tags); } } g_print ("\n"); for (i = 0; i < data->n_text; i++) { tags = NULL; /* Retrieve the stream's subtitle tags */ g_signal_emit_by_name (data->playbin, "get-text-tags", i, &tags); if (tags) { g_print ("subtitle stream %d:\n", i); if (gst_tag_list_get_string (tags, GST_TAG_LANGUAGE_CODE, &str)) { g_print (" language: %s\n", str); g_free (str); } gst_tag_list_free (tags); } } g_object_get (data->playbin, "current-video", &data->current_video, NULL); g_object_get (data->playbin, "current-audio", &data->current_audio, NULL); g_object_get (data->playbin, "current-text", &data->current_text, NULL); g_print ("\n"); g_print ("Currently playing video stream %d, audio stream %d and text stream %d\n", data->current_video, data->current_audio, data->current_text); g_print ("Type any number and hit ENTER to select a different audio stream\n"); } /* Process messages from GStreamer */ static gboolean handle_message (GstBus *bus, GstMessage *msg, CustomData *data) { GError *err; gchar *debug_info; switch (GST_MESSAGE_TYPE (msg)) { case GST_MESSAGE_ERROR: gst_message_parse_error (msg, &err, &debug_info); g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", debug_info ? 
debug_info : "none");
      g_clear_error (&err);
      g_free (debug_info);
      g_main_loop_quit (data->main_loop);
      break;
    case GST_MESSAGE_EOS:
      g_print ("End-Of-Stream reached.\n");
      g_main_loop_quit (data->main_loop);
      break;
    case GST_MESSAGE_STATE_CHANGED: {
      GstState old_state, new_state, pending_state;
      gst_message_parse_state_changed (msg, &old_state, &new_state, &pending_state);
      if (GST_MESSAGE_SRC (msg) == GST_OBJECT (data->playbin)) {
        if (new_state == GST_STATE_PLAYING) {
          /* Once we are in the playing state, analyze the streams */
          analyze_streams (data);
        }
      }
    } break;
  }

  /* We want to keep receiving messages */
  return TRUE;
}

/* Process keyboard input */
static gboolean handle_keyboard (GIOChannel *source, GIOCondition cond, CustomData *data) {
  gchar *str = NULL;

  if (g_io_channel_read_line (source, &str, NULL, NULL, NULL) == G_IO_STATUS_NORMAL) {
    int index = g_ascii_strtoull (str, NULL, 0);
    if (index < 0 || index >= data->n_audio) {
      g_printerr ("Index out of bounds\n");
    } else {
      /* If the input was a valid audio stream index, set the current audio stream */
      g_print ("Setting current audio stream to %d\n", index);
      g_object_set (data->playbin, "current-audio", index, NULL);
    }
  }
  g_free (str);
  return TRUE;
}
```

```shell
gcc playback-tutorial-1.c -o playback-tutorial-1 `pkg-config --cflags --libs gstreamer-1.0`
```

这个例程将打开一个窗口并播放一个含有音频的电影。这段媒体是从互联网获取的,所以窗口可能需要一定的时间才会出现,这取决于你的网络连接速度。媒体中含有的音频流数量将在终端上打印,用户能够通过输入一个数字并按下Enter键,从一条音频流切换到另一条音频流。当然,切换会有一定的延迟。

请牢记这里没有延迟管理(缓冲),因此如果连接速度较慢,电影可能会在几秒钟后停止。可以阅读[Basic Tutorial 12: Streaming]()来解决这个问题。

## Walkthrough

```c++
/* Structure to contain all our information, so we can pass it around */
typedef struct _CustomData {
  GstElement *playbin;  /* Our one and only element */
  gint n_video;         /* Number of embedded video streams */
  gint n_audio;         /* Number of embedded audio streams */
  gint n_text;          /* Number of embedded subtitle streams */
  gint current_video;   /* Currently playing video stream */
  gint current_audio;   /* Currently playing audio stream */
  gint current_text;    /* Currently playing subtitle stream */
  GMainLoop *main_loop; /* GLib's Main Loop */
} CustomData;
```

如往常一样,我们将所有需要的变量放入一个结构体中,以便能够在函数之间传递它们。在这篇教程中我们需要知道每种流的数量和当前正在播放的流。此外,我们将使用一种不同的机制来等待消息以允许交互,所以还需要GLib的`main loop`对象。

```c++
/* playbin flags */
typedef enum {
  GST_PLAY_FLAG_VIDEO = (1 << 0), /* We want video output */
  GST_PLAY_FLAG_AUDIO = (1 << 1), /* We want audio output */
  GST_PLAY_FLAG_TEXT = (1 << 2)   /* We want subtitle output */
} GstPlayFlags;
```

之后我们需要设置一些`playbin`的运行标志。我们希望有一个便利的枚举来轻松操纵这些标志,但由于playbin是一个插件而不是GStreamer核心的一部分,这个枚举并不对应用程序可用。技巧就是参照`playbin`参考文档中的`GstPlayFlags`,在我们的代码中简单地声明一个等价的枚举。GObject支持内省,因此这些flags也能够在运行时被提取出来而不需要使用这类技巧,只是方式更加笨重。
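下面的草图演示了上一段所说的运行时内省方式(假设`GST_PLAY_FLAG_VIDEO`在GObject类型系统中注册的nick为"video",这个nick是一个未经严格核实的假设):

```c++
/* 草图:通过 GObject 内省在运行时取出 playbin "flags" 属性中的某个 flag 值 */
GParamSpec *pspec =
    g_object_class_find_property (G_OBJECT_GET_CLASS (data.playbin), "flags");
if (pspec != NULL && G_IS_PARAM_SPEC_FLAGS (pspec)) {
  GFlagsClass *fclass = G_PARAM_SPEC_FLAGS (pspec)->flags_class;
  GFlagsValue *value = g_flags_get_value_by_nick (fclass, "video"); /* 假设 nick 为 "video" */
  if (value != NULL)
    g_print ("video flag = 0x%x\n", value->value);
}
```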
```c++
/* Forward definition for the message and keyboard processing functions */
static gboolean handle_message (GstBus *bus, GstMessage *msg, CustomData *data);
static gboolean handle_keyboard (GIOChannel *source, GIOCondition cond, CustomData *data);
```

预先声明两个我们将用到的回调函数:`handle_message`用于处理GStreamer message,之前的教程中已经讲解过;`handle_keyboard`用于处理键盘敲击事件,因为这篇教程提供了简单的交互方式。

我们跳过创建pipeline的部分,它仅仅创建了`playbin`插件并将它的`uri`属性设置为我们的测试媒体。`playbin`自身就是一条pipeline,并且在这篇教程中它是pipeline中唯一的element,所以我们跳过pipeline的完整创建过程,直接使用`playbin`即可。

```c++
/* Set flags to show Audio and Video but ignore Subtitles */
g_object_get (data.playbin, "flags", &flags, NULL);
flags |= GST_PLAY_FLAG_VIDEO | GST_PLAY_FLAG_AUDIO;
flags &= ~GST_PLAY_FLAG_TEXT;
g_object_set (data.playbin, "flags", flags, NULL);
```

`playbin`的行为可以通过修改它的`flags`属性来改变。`flags`可以是`GstPlayFlags`的任意按位组合,最常用的值如下:

| Flag | Description |
| :--- | :--- |
| GST_PLAY_FLAG_VIDEO | Enable video rendering. If this flag is not set, there will be no video output. |
| GST_PLAY_FLAG_AUDIO | Enable audio rendering. If this flag is not set, there will be no audio output. |
| GST_PLAY_FLAG_TEXT | Enable subtitle rendering. If this flag is not set, subtitles will not be shown in the video output. |
| GST_PLAY_FLAG_VIS | Enable rendering of visualisations when there is no video stream. Playback tutorial 6: Audio visualization goes into more details. |
| GST_PLAY_FLAG_DOWNLOAD | See Basic tutorial 12: Streaming and Playback tutorial 4: Progressive streaming. |
| GST_PLAY_FLAG_BUFFERING | See Basic tutorial 12: Streaming and Playback tutorial 4: Progressive streaming. |
| GST_PLAY_FLAG_DEINTERLACE | If the video content was interlaced, this flag instructs playbin to deinterlace it before displaying it. |

在这篇教程中,出于演示目的,我们使能音频和视频,并禁用字幕,其他flag保留默认值(这也是为什么我们在用`g_object_set()`覆写`flags`值之前,先使用`g_object_get()`获取其原有值)。
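举个例子,如果只想播放音频而不渲染视频,可以在上面的基础上这样修改(草图,沿用本例中的变量):

```c++
/* 草图:在现有 flags 基础上去掉视频渲染,仅保留音频输出 */
g_object_get (data.playbin, "flags", &flags, NULL);
flags |= GST_PLAY_FLAG_AUDIO;
flags &= ~(GST_PLAY_FLAG_VIDEO | GST_PLAY_FLAG_TEXT);
g_object_set (data.playbin, "flags", flags, NULL);
```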
```c++
/* Set connection speed. This will affect some internal decisions of playbin */
g_object_set (data.playbin, "connection-speed", 56, NULL);
```

这个属性在这个例子中并没有太大的作用。`connection-speed`告知`playbin`当前网络连接的最大速度,这样,如果服务器上有同一请求媒体的多个版本可用,`playbin`会选择最合适的版本。这主要与`hls`或`rtsp`等流协议结合使用。

```c++
g_object_set (data.playbin, "uri", "https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_cropped_multilingual.webm", "flags", flags, "connection-speed", 56, NULL);
```

我们也可以只调用一次`g_object_set()`来设置所有属性,这也是为什么这个接口需要以`NULL`作为最后一个参数。

```c++
/* Add a keyboard watch so we get notified of keystrokes */
#ifdef G_OS_WIN32
  io_stdin = g_io_channel_win32_new_fd (fileno (stdin));
#else
  io_stdin = g_io_channel_unix_new (fileno (stdin));
#endif
g_io_add_watch (io_stdin, G_IO_IN, (GIOFunc)handle_keyboard, &data);
```

这几行将一个回调函数与标准输入(键盘)连接起来。这里展示的机制由GLib实现,与GStreamer无关,所以不深入讨论。应用程序通常有自己的处理用户输入的手段,GStreamer对此不做过多干涉,除了在[Tutorial 17: DVD playback]中简单讨论的导航接口。

```c++
/* Create a GLib Main Loop and set it to run */
data.main_loop = g_main_loop_new (NULL, FALSE);
g_main_loop_run (data.main_loop);
```

为了允许交互,我们不再手动轮询GStreamer bus,取而代之的是创建一个`GMainLoop`并通过`g_main_loop_run()`将其设置为运行状态。这个函数将阻塞当前线程直到`g_main_loop_quit()`被调用,并在适当的时间调用我们之前注册的两个回调函数:当bus上出现消息的时候调用`handle_message`,当用户按下按键的时候调用`handle_keyboard`。

`handle_message`没有新内容,除了当pipeline切换到`PLAYING`状态时,它将调用`analyze_streams`函数:

```c++
/* Extract some metadata from the streams and print it on the screen */
static void analyze_streams (CustomData *data) {
  gint i;
  GstTagList *tags;
  gchar *str;
  guint rate;

  /* Read some properties */
  g_object_get (data->playbin, "n-video", &data->n_video, NULL);
  g_object_get (data->playbin, "n-audio", &data->n_audio, NULL);
  g_object_get (data->playbin, "n-text", &data->n_text, NULL);
```

如注释所言,这个函数收集媒体的信息并将它打印在屏幕上。视频流、音频流和字幕流的数量可以直接通过`n-video`、`n-audio`和`n-text`属性获取。

```c++
for (i = 0; i < data->n_video; i++) {
  tags = NULL;
  /* Retrieve the stream's video tags */
  g_signal_emit_by_name (data->playbin, "get-video-tags", i, &tags);
  if (tags) {
    g_print ("video stream %d:\n", i);
    gst_tag_list_get_string (tags, GST_TAG_VIDEO_CODEC, &str);
    g_print ("  codec: %s\n", str ? str : "unknown");
    g_free (str);
    gst_tag_list_free (tags);
  }
}
```

现在,对于每条流,我们想要提取出它的metadata。metadata以标签的形式存储在一个`GstTagList`中,这是一个以名字区分的数据片段列表。流的`GstTagList`可以通过`g_signal_emit_by_name()`检索,其中每个单独的标签可以使用`gst_tag_list_get_*`系列函数提取出来,例如`gst_tag_list_get_string()`。

注:这种相当不直观的检索标签列表的方法被称作“Action Signal”。Action signal由应用程序向特定的element发出,element执行一个动作并返回结果。它们的行为类似于动态函数调用,即方法以信号名称而不是内存地址来标识。Action signals列表可以在插件的文档中找到。

`playbin`定义了三个action signals用于检索metadata:`get-video-tags`、`get-audio-tags`和`get-text-tags`。标签的名字是标准化的,名称列表可以在`GstTagList`文档中找到。在这个例子中,我们感兴趣的是流的语言和各条流的编解码信息。

```c++
g_object_get (data->playbin, "current-video", &data->current_video, NULL);
g_object_get (data->playbin, "current-audio", &data->current_audio, NULL);
g_object_get (data->playbin, "current-text", &data->current_text, NULL);
```

一旦提取到了所有需要的metadata,我们通过另外3个属性获取`playbin`当前选中的流:`current-video`、`current-audio`和`current-text`。

值得注意的是,我们应该总是通过接口检查当前选中的流,而不是依赖假设,因为多个内部条件会使`playbin`在不同的运行中表现不同。此外,各条流在列表中的顺序也可能每次运行都不一样,因此通过元数据来识别特定的流至关重要。

```c++
/* Process keyboard input */
static gboolean handle_keyboard (GIOChannel *source, GIOCondition cond, CustomData *data) {
  gchar *str = NULL;

  if (g_io_channel_read_line (source, &str, NULL, NULL, NULL) == G_IO_STATUS_NORMAL) {
    int index = g_ascii_strtoull (str, NULL, 0);
    if (index < 0 || index >= data->n_audio) {
      g_printerr ("Index out of bounds\n");
    } else {
      /* If the input was a valid audio stream index, set the current audio stream */
      g_print ("Setting current audio stream to %d\n", index);
      g_object_set (data->playbin, "current-audio", index, NULL);
    }
  }
  g_free (str);
  return TRUE;
}
```

最后,我们允许用户切换正在播放的音频流。这个非常基础的函数从标准输入(键盘)读取一个字符串,将其转换为一个数字,并尝试设置`playbin`的`current-audio`属性。

切记这种切换不是立即生效的。一些之前已解码的音频数据仍然在pipeline中流动,而新选中的流这时才开始解码。延迟取决于媒体在容器中的复用方式以及`playbin`内部queue的长度(后者取决于网络状况)。

如果你运行这个例程,你将能够在播放电影的同时,通过按下0、1或2(再按Enter)从一种语言切换到另一种语言。

## Conclusion

这篇教程展示了:

- `playbin`的更多属性:`flags`、`connection-speed`、`n-video`、`n-audio`、`n-text`、`current-video`、`current-audio`和`current-text`。
- 如何通过`g_signal_emit_by_name()`检索一条流的标签列表。
- 如何通过`gst_tag_list_get_string()`或`gst_tag_list_get_uint()`检索特定的tag。
- 如何通过简单地修改`current-audio`属性来切换当前播放的音频流。

下一篇播放教程将展示如何处理字幕,包括内嵌字幕和外挂字幕。

================================================
FILE: basic_theory/playback/playbin_sink.md
================================================

# Playback tutorial 7: Custom playbin sinks

## Goal

`playbin`可以通过手动选择其audio sink和video sink进行进一步定制。这允许应用程序只依赖`playbin`提取和解码媒体数据,然后自行管理最终的渲染/显示。这篇教程展示了:

- 如何替换`playbin`的sink。
- 如何使用一条复杂的pipeline作为sink。

## Introduction

`playbin`的两个属性允许用户选择想要的audio sink和video sink:`audio-sink`和`video-sink`。应用程序只需要实例化适当的`GstElement`并将其传递给`playbin`的这两个属性。

然而,这两个属性只接受单个element作为sink。如果需要使用更复杂的pipeline作为sink,例如一个均衡器加上一个audio sink,就需要将它们包裹在一个bin中,这样对于`playbin`来说,这个bin看起来就像一个独立的element。

Bin(`GstBin`)是一个封装了部分pipeline的容器,通过bin,这部分elements可以作为一个独立的element来管理。例如,我们在所有教程中使用的`GstPipeline`其实就是一个`GstBin`,只是它不再与外部的element交互,即`GstPipeline`是最顶(外)层的`GstBin`。Bin中的elements通过`GstGhostPad`与外部elements连接:`GstGhostPad`是一种仅将数据从外部pad简单传递给指定的内部pad的Pad。

![img](images/bin-element-ghost.png)

`GstBin`也是一种`GstElement`,因此任何需要element的地方都能够使用`GstBin`。

## An equalized player

```c++
#include <gst/gst.h>

int main(int argc, char *argv[]) {
  GstElement *pipeline, *bin, *equalizer, *convert, *sink;
  GstPad *pad, *ghost_pad;
  GstBus *bus;
  GstMessage *msg;

  /* Initialize GStreamer */
  gst_init (&argc, &argv);

  /* Build the pipeline */
  pipeline = gst_parse_launch ("playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm", NULL);

  /* Create
## Conclusion

这篇教程展示了:

- 如何通过`audio-sink`和`video-sink`属性指定`playbin`的sink element。
- 如何将多个elements包裹为一个`GstBin`,使其能够像单个element一样被`playbin`使用。
================================================
FILE: basic_theory/playback/progressive_stream.md
================================================

# Playback tutorial 4: Progressive streaming

## Goal

Basic tutorial 12: Streaming展示了如何通过缓冲机制在糟糕的网络情况下提高用户体验。这篇教程是它的进一步拓展——启用流媒体的本地存储,并描述这种技术的优点。其中,主要展示了:

- 如何启用渐进式下载。
- 如何知道已经下载了哪些内容。
- 如何知道已下载内容的存储位置。
- 如何限制保存的下载数据的数量。

## Introduction

当播放一个网络流时,数据从互联网获取;为了保证流畅的播放,会保留一小块未来数据的缓冲区。然而,数据在被播放或渲染后将立即丢弃(程序中不会保留已播放过的数据)。这意味着,假如用户想要回放过去的某个时刻,对应的数据需要重新下载。

为流媒体量身定制的媒体播放器,例如YouTube,通常会将所有下载的数据存储在本地,以防意外情况,并使用一个图形进度条来展示当前文件的下载进度。`playbin`通过`DOWNLOAD`标记提供了类似的功能:为了更快地播放已下载过的数据,`playbin`能够将媒体保存到一个本地临时文件中。

本教程同时展示了如何使用Buffering Query(缓冲查询),它可以让我们知道文件的哪些部分是可用的。

## A network-resilient example with local storage

```c++
#include <gst/gst.h>
#include <string.h>

#define GRAPH_LENGTH 78

/* playbin flags */
typedef enum {
  GST_PLAY_FLAG_DOWNLOAD = (1 << 7) /* Enable progressive download (on selected formats) */
} GstPlayFlags;

typedef struct _CustomData {
  gboolean is_live;
  GstElement *pipeline;
  GMainLoop *loop;
  gint buffering_level;
} CustomData;

static void got_location (GstObject *gstobject, GstObject *prop_object, GParamSpec *prop, gpointer data) {
  gchar *location;
  g_object_get (G_OBJECT (prop_object), "temp-location", &location, NULL);
  g_print ("Temporary file: %s\n", location);
  g_free (location);
  /* Uncomment this line to keep the temporary file after the program exits */
  /* g_object_set (G_OBJECT (prop_object), "temp-remove", FALSE, NULL); */
}

static void cb_message (GstBus *bus, GstMessage *msg, CustomData *data) {

  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_ERROR: {
      GError *err;
      gchar *debug;

      gst_message_parse_error (msg, &err, &debug);
      g_print ("Error: %s\n", err->message);
      g_error_free (err);
      g_free (debug);

      gst_element_set_state (data->pipeline, GST_STATE_READY);
      g_main_loop_quit (data->loop);
      break;
    }
    case GST_MESSAGE_EOS:
      /* end-of-stream */
      gst_element_set_state (data->pipeline, GST_STATE_READY);
      g_main_loop_quit (data->loop);
      break;
    case GST_MESSAGE_BUFFERING:
      /* If the stream is live, we do not care about buffering.
*/ if (data->is_live) break; gst_message_parse_buffering (msg, &data->buffering_level); /* Wait until buffering is complete before start/resume playing */ if (data->buffering_level < 100) gst_element_set_state (data->pipeline, GST_STATE_PAUSED); else gst_element_set_state (data->pipeline, GST_STATE_PLAYING); break; case GST_MESSAGE_CLOCK_LOST: /* Get a new clock */ gst_element_set_state (data->pipeline, GST_STATE_PAUSED); gst_element_set_state (data->pipeline, GST_STATE_PLAYING); break; default: /* Unhandled message */ break; } } static gboolean refresh_ui (CustomData *data) { GstQuery *query; gboolean result; query = gst_query_new_buffering (GST_FORMAT_PERCENT); result = gst_element_query (data->pipeline, query); if (result) { gint n_ranges, range, i; gchar graph[GRAPH_LENGTH + 1]; gint64 position = 0, duration = 0; memset (graph, ' ', GRAPH_LENGTH); graph[GRAPH_LENGTH] = '\0'; n_ranges = gst_query_get_n_buffering_ranges (query); for (range = 0; range < n_ranges; range++) { gint64 start, stop; gst_query_parse_nth_buffering_range (query, range, &start, &stop); start = start * GRAPH_LENGTH / (stop - start); stop = stop * GRAPH_LENGTH / (stop - start); for (i = (gint)start; i < stop; i++) graph [i] = '-'; } if (gst_element_query_position (data->pipeline, GST_FORMAT_TIME, &position) && GST_CLOCK_TIME_IS_VALID (position) && gst_element_query_duration (data->pipeline, GST_FORMAT_TIME, &duration) && GST_CLOCK_TIME_IS_VALID (duration)) { i = (gint)(GRAPH_LENGTH * (double)position / (double)(duration + 1)); graph [i] = data->buffering_level < 100 ? 'X' : '>'; } g_print ("[%s]", graph); if (data->buffering_level < 100) { g_print (" Buffering: %3d%%", data->buffering_level); } else { g_print (" "); } g_print ("\r"); } return TRUE; } int main(int argc, char *argv[]) { GstElement *pipeline; GstBus *bus; GstStateChangeReturn ret; GMainLoop *main_loop; CustomData data; guint flags; /* Initialize GStreamer */ gst_init (&argc, &argv); /* Initialize our data structure */ memset (&data, 0, sizeof (data)); data.buffering_level = 100; /* Build the pipeline */ pipeline = gst_parse_launch ("playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm", NULL); bus = gst_element_get_bus (pipeline); /* Set the download flag */ g_object_get (pipeline, "flags", &flags, NULL); flags |= GST_PLAY_FLAG_DOWNLOAD; g_object_set (pipeline, "flags", flags, NULL); /* Uncomment this line to limit the amount of downloaded data */ /* g_object_set (pipeline, "ring-buffer-max-size", (guint64)4000000, NULL); */ /* Start playing */ ret = gst_element_set_state (pipeline, GST_STATE_PLAYING); if (ret == GST_STATE_CHANGE_FAILURE) { g_printerr ("Unable to set the pipeline to the playing state.\n"); gst_object_unref (pipeline); return -1; } else if (ret == GST_STATE_CHANGE_NO_PREROLL) { data.is_live = TRUE; } main_loop = g_main_loop_new (NULL, FALSE); data.loop = main_loop; data.pipeline = pipeline; gst_bus_add_signal_watch (bus); g_signal_connect (bus, "message", G_CALLBACK (cb_message), &data); g_signal_connect (pipeline, "deep-notify::temp-location", G_CALLBACK (got_location), NULL); /* Register a function that GLib will call every second */ g_timeout_add_seconds (1, (GSourceFunc)refresh_ui, &data); g_main_loop_run (main_loop); /* Free resources */ g_main_loop_unref (main_loop); gst_object_unref (bus); gst_element_set_state (pipeline, GST_STATE_NULL); gst_object_unref (pipeline); g_print ("\n"); return 0; } ``` 
这个例程将打开一个窗口并播放一个含有音频的电影。这段媒体是从互联网获取的,所以窗口可能需要一定的时间才会出现,这取决于你的网络连接速度。在控制台窗口中,你将看到一条表明媒体存储位置的信息,以及一个文本格式的图形,代表已下载的进度和当前播放的位置。一条缓冲消息将在需要缓冲时打印,但当你的网速足够快时这条消息不会出现。

## Walkthrough

### Setup

```c++
/* Set the download flag */
g_object_get (pipeline, "flags", &flags, NULL);
flags |= GST_PLAY_FLAG_DOWNLOAD;
g_object_set (pipeline, "flags", flags, NULL);
```

通过设置这个flag,`playbin`指示它内部的queue(实际上是一个`queue2`元素)存储所有下载的数据。
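代码中的`GST_PLAY_FLAG_DOWNLOAD`是按照`playbin`的`GstPlayFlags`定义硬编码的`(1 << 7)`。如果不想硬编码,也可以通过GObject类型系统在运行时查询这个flag的值,下面是一个示意片段(仅演示思路,省略了错误检查):

```c++
/* 示意代码:通过flags属性的GParamSpec查出nick为"download"的flag值 */
GParamSpec *pspec = g_object_class_find_property (G_OBJECT_GET_CLASS (pipeline), "flags");
GFlagsClass *flags_class = G_FLAGS_CLASS (g_type_class_ref (pspec->value_type));
GFlagsValue *download = g_flags_get_value_by_nick (flags_class, "download");
if (download)
  g_print ("GST_PLAY_FLAG_DOWNLOAD = 0x%x\n", download->value);
g_type_class_unref (flags_class);
```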
```c++
g_signal_connect (pipeline, "deep-notify::temp-location", G_CALLBACK (got_location), NULL);
```

当`GstObject`元素(例如`playbin`)的任何子元素的属性发生改变时,都将发出`deep-notify`信号。在这个例子中,我们关注`deep-notify::temp-location`的变化,它将指明`queue2`决定把下载的数据存储在哪里。

```c++
static void got_location (GstObject *gstobject, GstObject *prop_object, GParamSpec *prop, gpointer data) {
  gchar *location;
  g_object_get (G_OBJECT (prop_object), "temp-location", &location, NULL);
  g_print ("Temporary file: %s\n", location);
  g_free (location);
  /* Uncomment this line to keep the temporary file after the program exits */
  /* g_object_set (G_OBJECT (prop_object), "temp-remove", FALSE, NULL); */
}
```

`got_location`将读取`queue2`的`temp-location`属性并将其打印到屏幕上。

当pipeline的状态从`PAUSED`切换到`READY`时,这个临时文件将被删除。正如注释语句所言,你可以将`queue2`的`temp-remove`属性设为`FALSE`以禁用这一行为。

## User Interface

在main函数中,我们安装了一个定时器,用于每秒刷新一次UI:

```c++
/* Register a function that GLib will call every second */
g_timeout_add_seconds (1, (GSourceFunc)refresh_ui, &data);
```

`refresh_ui`方法查询pipeline,获取当前已下载的文件范围以及当前的播放位置,并构建一个文本图像把这些信息打印到屏幕上,每次调用都覆盖上一次的输出,使其看起来像动画:

```
[---->-------                ]
```

破折号`-`标识已下载的部分,大于号`>`表示当前播放的位置(当pipeline处于暂停状态时将变为`X`)。但请记住,假如你的网速足够快,你将看不到下载进度条(破折号)逐渐增长的过程,它在一开始就完成了下载。

```c++
static gboolean refresh_ui (CustomData *data) {
  GstQuery *query;
  gboolean result;

  query = gst_query_new_buffering (GST_FORMAT_PERCENT);
  result = gst_element_query (data->pipeline, query);
```

我们在`refresh_ui`中做的第一件事就是使用`gst_query_new_buffering()`构造一个新的缓冲查询`GstQuery`对象,并使用`gst_element_query()`将其传递给`playbin`。在Basic tutorial 4: Time management中,我们已经知道如何使用特定的方法来完成像Position和Duration这样简单的查询;类似缓冲区这种更加复杂的查询,则需要使用更通用的`gst_element_query()`接口。

关于缓冲区的查询可以是各种`GstFormat`格式的,包括TIME,BYTES,PERCENT等等。但不是所有的elements都能响应所有格式的查询,因此你需要检查你的pipeline支持哪些格式。假如`gst_element_query()`返回`TRUE`,代表查询成功。查询的结果被保存在传入的`GstQuery`结构体中,然后我们可以使用多种解析方法提取出想要的信息:

```c++
n_ranges = gst_query_get_n_buffering_ranges (query);
for (range = 0; range < n_ranges; range++) {
  gint64 start, stop;
  gst_query_parse_nth_buffering_range (query, range, &start, &stop);
  start = start * GRAPH_LENGTH / (stop - start);
  stop = stop * GRAPH_LENGTH / (stop - start);
  for (i = (gint)start; i < stop; i++)
    graph [i] = '-';
}
```

数据不一定是从文件开头连续下载的:例如,seek(跳播)可能会迫使播放器从一个新的位置开始下载,并留下不连续的已下载数据块。因此`gst_query_get_n_buffering_ranges()`将返回已下载数据范围(数据块)的数目,然后我们可以使用`gst_query_parse_nth_buffering_range()`来获取每个范围的起止位置。

查询结果的格式取决于调用`gst_query_new_buffering()`时指定的格式,在这个例子中是PERCENT(百分比)。这些值将用于生成表示下载进度的文本图像。

```c++
if (gst_element_query_position (data->pipeline, GST_FORMAT_TIME, &position) &&
    GST_CLOCK_TIME_IS_VALID (position) &&
    gst_element_query_duration (data->pipeline, GST_FORMAT_TIME, &duration) &&
    GST_CLOCK_TIME_IS_VALID (duration)) {
  i = (gint)(GRAPH_LENGTH * (double)position / (double)(duration + 1));
  graph [i] = data->buffering_level < 100 ? 'X' : '>';
}
```

接下来查询当前的播放位置。它同样可以以PERCENT的格式查询,这样代码就和查询缓冲范围的部分差不多;但目前对播放位置的PERCENT格式查询支持还不完善,因此我们使用TIME格式代替,并同时查询总时长来计算百分比。

当前播放的位置使用一个`>`或者`X`来表示,这取决于缓冲级别:当缓冲级别低于100%时,`cb_message`会把pipeline的状态设置为`PAUSED`,于是打印的是`X`;当缓冲级别达到100%时,`cb_message`会把pipeline的状态设置为`PLAYING`,打印的是`>`。

注:由于开发板的网络原因,我并没能将例程运行起来,因此文档中关于进度的解释我其实并没有完全弄明白,尤其是上面绘制进度的start和stop的计算。根据我的理解:pipeline会等待整个媒体文件缓冲完成才会开始播放,在缓冲完成之前打印的都是`X`;例程并没有支持动态切换pipeline的状态,因此这里的播放和暂停与实际播放器能够完成的动态交互不太一样。

```c++
if (data->buffering_level < 100) {
  g_print (" Buffering: %3d%%", data->buffering_level);
} else {
  g_print ("                ");
}
```

最后,假如缓冲级别低于100%,我们将打印出这个信息。

### Limiting the size of the downloaded file

```c++
/* Uncomment this line to limit the amount of downloaded data */
/* g_object_set (pipeline, "ring-buffer-max-size", (guint64)4000000, NULL); */
```

通过覆盖已经播放过的区域,这条语句限制了临时文件的大小。观察下载进度条,可以看出文件中哪些区域仍然可用。

## Conclusion

这篇教程展示了:

- `playbin`如何通过`GST_PLAY_FLAG_DOWNLOAD`标识实现渐进式下载。
- 如何通过缓冲查询`GstQuery`得知已下载的内容。
- 如何通过`deep-notify::temp-location`获取下载文件的存储位置。
- 如何通过`ring-buffer-max-size`限制`playbin`下载的临时文件的大小。

================================================
FILE: basic_theory/playback/shortcut_pipeline.md
================================================

# Playback tutorial 3: Short-cutting the pipeline

## Goal

[Basic tutorial 8: Short-cutting the pipeline]()展示了应用程序如何通过`appsink`和`appsrc`插件手动地从pipeline中提取和插入数据。`playbin`同样允许使用这两个插件,但是连接的方式不一样。要将`playbin`与`appsink`连接,请阅读[Playback tutorial 7: Custom playbin sinks]()。这篇教程展示了:

- 如何将`appsrc`与`playbin`连接。
- 如何配置`appsrc`。

## A playbin waveform generator

```c++
#include <gst/gst.h>
#include <gst/audio/audio.h>
#include <string.h>

#define CHUNK_SIZE 1024   /* Amount of bytes we are sending in each buffer */
#define SAMPLE_RATE 44100 /* Samples per second we are sending */

/* Structure to contain all our information, so we can pass it to callbacks */
typedef struct _CustomData {
  GstElement *pipeline;
  GstElement *app_source;

  guint64 num_samples;   /* Number of samples generated so far (for timestamp generation) */
  gfloat a, b, c, d;     /* For waveform generation */

  guint sourceid;        /* To control the GSource */

  GMainLoop *main_loop;  /* GLib's Main Loop */
} CustomData;

/* This method is called by the idle GSource in the mainloop, to feed CHUNK_SIZE bytes into appsrc.
 * The idle handler is added to the mainloop when appsrc requests us to start sending data (need-data signal)
 * and is removed when appsrc has enough data (enough-data signal).
*/ static gboolean push_data (CustomData *data) { GstBuffer *buffer; GstFlowReturn ret; int i; GstMapInfo map; gint16 *raw; gint num_samples = CHUNK_SIZE / 2; /* Because each sample is 16 bits */ gfloat freq; /* Create a new empty buffer */ buffer = gst_buffer_new_and_alloc (CHUNK_SIZE); /* Set its timestamp and duration */ GST_BUFFER_TIMESTAMP (buffer) = gst_util_uint64_scale (data->num_samples, GST_SECOND, SAMPLE_RATE); GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale (num_samples, GST_SECOND, SAMPLE_RATE); /* Generate some psychodelic waveforms */ gst_buffer_map (buffer, &map, GST_MAP_WRITE); raw = (gint16 *)map.data; data->c += data->d; data->d -= data->c / 1000; freq = 1100 + 1000 * data->d; for (i = 0; i < num_samples; i++) { data->a += data->b; data->b -= data->a / freq; raw[i] = (gint16)(500 * data->a); } gst_buffer_unmap (buffer, &map); data->num_samples += num_samples; /* Push the buffer into the appsrc */ g_signal_emit_by_name (data->app_source, "push-buffer", buffer, &ret); /* Free the buffer now that we are done with it */ gst_buffer_unref (buffer); if (ret != GST_FLOW_OK) { /* We got some error, stop sending data */ return FALSE; } return TRUE; } /* This signal callback triggers when appsrc needs data. Here, we add an idle handler * to the mainloop to start pushing data into the appsrc */ static void start_feed (GstElement *source, guint size, CustomData *data) { if (data->sourceid == 0) { g_print ("Start feeding\n"); data->sourceid = g_idle_add ((GSourceFunc) push_data, data); } } /* This callback triggers when appsrc has enough data and we can stop sending. * We remove the idle handler from the mainloop */ static void stop_feed (GstElement *source, CustomData *data) { if (data->sourceid != 0) { g_print ("Stop feeding\n"); g_source_remove (data->sourceid); data->sourceid = 0; } } /* This function is called when an error message is posted on the bus */ static void error_cb (GstBus *bus, GstMessage *msg, CustomData *data) { GError *err; gchar *debug_info; /* Print error details on the screen */ gst_message_parse_error (msg, &err, &debug_info); g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", debug_info ? debug_info : "none"); g_clear_error (&err); g_free (debug_info); g_main_loop_quit (data->main_loop); } /* This function is called when playbin has created the appsrc element, so we have * a chance to configure it. */ static void source_setup (GstElement *pipeline, GstElement *source, CustomData *data) { GstAudioInfo info; GstCaps *audio_caps; g_print ("Source has been created. 
Configuring.\n");
  data->app_source = source;

  /* Configure appsrc */
  gst_audio_info_set_format (&info, GST_AUDIO_FORMAT_S16, SAMPLE_RATE, 1, NULL);
  audio_caps = gst_audio_info_to_caps (&info);
  g_object_set (source, "caps", audio_caps, "format", GST_FORMAT_TIME, NULL);
  g_signal_connect (source, "need-data", G_CALLBACK (start_feed), data);
  g_signal_connect (source, "enough-data", G_CALLBACK (stop_feed), data);
  gst_caps_unref (audio_caps);
}

int main(int argc, char *argv[]) {
  CustomData data;
  GstBus *bus;

  /* Initialize custom data structure */
  memset (&data, 0, sizeof (data));
  data.b = 1; /* For waveform generation */
  data.d = 1;

  /* Initialize GStreamer */
  gst_init (&argc, &argv);

  /* Create the playbin element */
  data.pipeline = gst_parse_launch ("playbin uri=appsrc://", NULL);
  g_signal_connect (data.pipeline, "source-setup", G_CALLBACK (source_setup), &data);

  /* Instruct the bus to emit signals for each received message, and connect to the interesting signals */
  bus = gst_element_get_bus (data.pipeline);
  gst_bus_add_signal_watch (bus);
  g_signal_connect (G_OBJECT (bus), "message::error", (GCallback)error_cb, &data);
  gst_object_unref (bus);

  /* Start playing the pipeline */
  gst_element_set_state (data.pipeline, GST_STATE_PLAYING);

  /* Create a GLib Main Loop and set it to run */
  data.main_loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (data.main_loop);

  /* Free resources */
  gst_element_set_state (data.pipeline, GST_STATE_NULL);
  gst_object_unref (data.pipeline);
  return 0;
}
```

这个例程播放的不是来自网络的媒体文件,而是由应用程序实时合成的音频波形:`playbin`内部的`appsrc`不断向pipeline输送程序生成的PCM数据。运行后不会打开视频窗口,你将听到一段持续变化的合成音调,整个过程不需要网络连接。

## Walkthrough

为了使用`appsrc`作为这条pipeline的数据源,实例化一个`playbin`对象并将`uri`属性设置为`appsrc://`:

```c++
/* Create the playbin element */
data.pipeline = gst_parse_launch ("playbin uri=appsrc://", NULL);
```

`playbin`将在内部创建一个`appsrc`元素,并且发出`source-setup`信号以通知应用程序来配置它:

```c++
g_signal_connect (data.pipeline, "source-setup", G_CALLBACK (source_setup), &data);
```

需要注意的是,设置`appsrc`的caps是很重要的,因为一旦信号处理函数(`source_setup`回调)返回,`playbin`将基于这个caps实例化pipeline的下一个元素。假如caps没有被正确设置,会影响整个pipeline的运行(一个常见的现象就是`appsrc`的`need-data`回调可能触发了一次之后就不再触发):

```c++
/* This function is called when playbin has created the appsrc element, so we have
 * a chance to configure it. */
static void source_setup (GstElement *pipeline, GstElement *source, CustomData *data) {
  GstAudioInfo info;
  GstCaps *audio_caps;

  g_print ("Source has been created. Configuring.\n");
  data->app_source = source;

  /* Configure appsrc */
  gst_audio_info_set_format (&info, GST_AUDIO_FORMAT_S16, SAMPLE_RATE, 1, NULL);
  audio_caps = gst_audio_info_to_caps (&info);
  g_object_set (source, "caps", audio_caps, "format", GST_FORMAT_TIME, NULL);
  g_signal_connect (source, "need-data", G_CALLBACK (start_feed), data);
  g_signal_connect (source, "enough-data", G_CALLBACK (stop_feed), data);
  gst_caps_unref (audio_caps);
}
```

这里关于`appsrc`的配置和[Basic tutorial 8: Short-cutting the pipeline]()中的完全一致:caps被设置为`audio/x-raw`,并注册了两个回调函数,因此`appsrc`可以通知应用程序何时开始和停止输送数据。可以阅读[Basic tutorial 8: Short-cutting the pipeline]()以获得更多细节。
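另外值得一提的是,除了`push-buffer`这类action signal,也可以链接`gstreamer-app`库,直接调用`appsrc`的C API(`gst/app/gstappsrc.h`)来送入数据。下面是一个示意片段(对应`push_data`中emit信号的那一步):

```c++
#include <gst/app/gstappsrc.h>

/* 示意代码:用gst_app_src_push_buffer()代替"push-buffer"信号。
 * 注意该函数会接管buffer的所有权,因此不需要再调用gst_buffer_unref(),
 * 这一点与通过信号push是不同的。 */
GstFlowReturn ret = gst_app_src_push_buffer (GST_APP_SRC (data->app_source), buffer);
if (ret != GST_FLOW_OK)
  return FALSE;
```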
除此以外,`playbin`负责pipeline的剩余部分,应用程序只需要负责生成数据。

想知道如何使用`appsink`从`playbin`中提取数据,请阅读[Playback tutorial 7: Custom playbin sinks]()。

## Conclusion

这篇教程在`playbin`上实现了[Basic tutorial 8: Short-cutting the pipeline]()中的操作:

- 如何通过设置`playbin`的`uri`属性为`appsrc://`来连接`appsrc`。
- 如何通过`source-setup`信号配置`appsrc`。

================================================
FILE: basic_theory/playback/subtitle.md
================================================

# Playback tutorial 2: Subtitle management

## Goal

这篇教程与上一篇非常相似,但是我们将切换字幕流而不是音频流。我们将学到:

- 如何选择字幕流。
- 如何添加外挂字幕。
- 如何自定义字幕的字体。

## Introduction

我们已经知道(通过之前的教程)容器文件可以拥有多个音视频流,并且我们可以通过修改`current-audio`和`current-video`属性从中选择要播放的流。切换字幕流也同样简单。

值得注意的是,就像音频和视频一样,`playbin`负责为字幕选择正确的解码器,并且GStreamer的插件结构允许像复制文件一样简单地添加对新格式的支持。这些细节对应用程序开发者都是不可见的。

除了内嵌在容器中的字幕,`playbin`还提供了从外部URI添加额外字幕流的可能性。

这篇教程打开了一个本身包含5个字幕流的文件,并且通过另一个文件额外添加了一个字幕流(希腊语)。

## The multilingual player with subtitles

```c++
#include <gst/gst.h>
#include <stdio.h>

/* Structure to contain all our information, so we can pass it around */
typedef struct _CustomData {
  GstElement *playbin;  /* Our one and only element */

  gint n_video;         /* Number of embedded video streams */
  gint n_audio;         /* Number of embedded audio streams */
  gint n_text;          /* Number of embedded subtitle streams */

  gint current_video;   /* Currently playing video stream */
  gint current_audio;   /* Currently playing audio stream */
  gint current_text;    /* Currently playing subtitle stream */

  GMainLoop *main_loop; /* GLib's Main Loop */
} CustomData;

/* playbin flags */
typedef enum {
  GST_PLAY_FLAG_VIDEO = (1 << 0), /* We want video output */
  GST_PLAY_FLAG_AUDIO = (1 << 1), /* We want audio output */
  GST_PLAY_FLAG_TEXT  = (1 << 2)  /* We want subtitle output */
} GstPlayFlags;

/* Forward definition for the message and keyboard processing functions */
static gboolean handle_message (GstBus *bus, GstMessage *msg, CustomData *data);
static gboolean handle_keyboard (GIOChannel *source, GIOCondition cond, CustomData *data);

int main(int argc, char *argv[]) {
  CustomData data;
  GstBus *bus;
  GstStateChangeReturn ret;
  gint flags;
  GIOChannel *io_stdin;

  /* Initialize GStreamer */
  gst_init (&argc, &argv);

  /* Create the elements */
  data.playbin = gst_element_factory_make ("playbin", "playbin");

  if (!data.playbin) {
    g_printerr ("Not all elements could be created.\n");
    return -1;
  }

  /* Set the URI to play */
  g_object_set (data.playbin, "uri", "https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.ogv", NULL);

  /* Set the subtitle URI to play and some font description */
  g_object_set (data.playbin, "suburi", "https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer_gr.srt", NULL);
  g_object_set (data.playbin, "subtitle-font-desc", "Sans, 18", NULL);

  /* Set flags to show Audio, Video and Subtitles */
  g_object_get (data.playbin, "flags", &flags, NULL);
  flags |= GST_PLAY_FLAG_VIDEO | GST_PLAY_FLAG_AUDIO | GST_PLAY_FLAG_TEXT;
  g_object_set
(data.playbin, "flags", flags, NULL); /* Add a bus watch, so we get notified when a message arrives */ bus = gst_element_get_bus (data.playbin); gst_bus_add_watch (bus, (GstBusFunc)handle_message, &data); /* Add a keyboard watch so we get notified of keystrokes */ #ifdef G_OS_WIN32 io_stdin = g_io_channel_win32_new_fd (fileno (stdin)); #else io_stdin = g_io_channel_unix_new (fileno (stdin)); #endif g_io_add_watch (io_stdin, G_IO_IN, (GIOFunc)handle_keyboard, &data); /* Start playing */ ret = gst_element_set_state (data.playbin, GST_STATE_PLAYING); if (ret == GST_STATE_CHANGE_FAILURE) { g_printerr ("Unable to set the pipeline to the playing state.\n"); gst_object_unref (data.playbin); return -1; } /* Create a GLib Main Loop and set it to run */ data.main_loop = g_main_loop_new (NULL, FALSE); g_main_loop_run (data.main_loop); /* Free resources */ g_main_loop_unref (data.main_loop); g_io_channel_unref (io_stdin); gst_object_unref (bus); gst_element_set_state (data.playbin, GST_STATE_NULL); gst_object_unref (data.playbin); return 0; } /* Extract some metadata from the streams and print it on the screen */ static void analyze_streams (CustomData *data) { gint i; GstTagList *tags; gchar *str; guint rate; /* Read some properties */ g_object_get (data->playbin, "n-video", &data->n_video, NULL); g_object_get (data->playbin, "n-audio", &data->n_audio, NULL); g_object_get (data->playbin, "n-text", &data->n_text, NULL); g_print ("%d video stream(s), %d audio stream(s), %d text stream(s)\n", data->n_video, data->n_audio, data->n_text); g_print ("\n"); for (i = 0; i < data->n_video; i++) { tags = NULL; /* Retrieve the stream's video tags */ g_signal_emit_by_name (data->playbin, "get-video-tags", i, &tags); if (tags) { g_print ("video stream %d:\n", i); gst_tag_list_get_string (tags, GST_TAG_VIDEO_CODEC, &str); g_print (" codec: %s\n", str ? 
str : "unknown"); g_free (str); gst_tag_list_free (tags); } } g_print ("\n"); for (i = 0; i < data->n_audio; i++) { tags = NULL; /* Retrieve the stream's audio tags */ g_signal_emit_by_name (data->playbin, "get-audio-tags", i, &tags); if (tags) { g_print ("audio stream %d:\n", i); if (gst_tag_list_get_string (tags, GST_TAG_AUDIO_CODEC, &str)) { g_print (" codec: %s\n", str); g_free (str); } if (gst_tag_list_get_string (tags, GST_TAG_LANGUAGE_CODE, &str)) { g_print (" language: %s\n", str); g_free (str); } if (gst_tag_list_get_uint (tags, GST_TAG_BITRATE, &rate)) { g_print (" bitrate: %d\n", rate); } gst_tag_list_free (tags); } } g_print ("\n"); for (i = 0; i < data->n_text; i++) { tags = NULL; /* Retrieve the stream's subtitle tags */ g_print ("subtitle stream %d:\n", i); g_signal_emit_by_name (data->playbin, "get-text-tags", i, &tags); if (tags) { if (gst_tag_list_get_string (tags, GST_TAG_LANGUAGE_CODE, &str)) { g_print (" language: %s\n", str); g_free (str); } gst_tag_list_free (tags); } else { g_print (" no tags found\n"); } } g_object_get (data->playbin, "current-video", &data->current_video, NULL); g_object_get (data->playbin, "current-audio", &data->current_audio, NULL); g_object_get (data->playbin, "current-text", &data->current_text, NULL); g_print ("\n"); g_print ("Currently playing video stream %d, audio stream %d and subtitle stream %d\n", data->current_video, data->current_audio, data->current_text); g_print ("Type any number and hit ENTER to select a different subtitle stream\n"); } /* Process messages from GStreamer */ static gboolean handle_message (GstBus *bus, GstMessage *msg, CustomData *data) { GError *err; gchar *debug_info; switch (GST_MESSAGE_TYPE (msg)) { case GST_MESSAGE_ERROR: gst_message_parse_error (msg, &err, &debug_info); g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message); g_printerr ("Debugging information: %s\n", debug_info ? 
debug_info : "none"); g_clear_error (&err); g_free (debug_info); g_main_loop_quit (data->main_loop); break; case GST_MESSAGE_EOS: g_print ("End-Of-Stream reached.\n"); g_main_loop_quit (data->main_loop); break; case GST_MESSAGE_STATE_CHANGED: { GstState old_state, new_state, pending_state; gst_message_parse_state_changed (msg, &old_state, &new_state, &pending_state); if (GST_MESSAGE_SRC (msg) == GST_OBJECT (data->playbin)) { if (new_state == GST_STATE_PLAYING) { /* Once we are in the playing state, analyze the streams */ analyze_streams (data); } } } break; } /* We want to keep receiving messages */ return TRUE; } /* Process keyboard input */ static gboolean handle_keyboard (GIOChannel *source, GIOCondition cond, CustomData *data) { gchar *str = NULL; if (g_io_channel_read_line (source, &str, NULL, NULL, NULL) == G_IO_STATUS_NORMAL) { int index = atoi (str); if (index < 0 || index >= data->n_text) { g_printerr ("Index out of bounds\n"); } else { /* If the input was a valid subtitle stream index, set the current subtitle stream */ g_print ("Setting current subtitle stream to %d\n", index); g_object_set (data->playbin, "current-text", index, NULL); } } g_free (str); return TRUE; } ``` 这个例程将打开一个窗口并播放一个含有音频的电影。这段媒体是从互联网获取的,所以窗口可能需要一定的时间才会出现,这取决于你的网络连接速度。这段媒体含有的字幕流数量将在终端上打印,用户能够通过输入一个数字并按下enter按键,从一个字幕流切换到另一个字幕流。当然,切换会有一定的延迟。 请牢记这里没有延迟管理(缓冲),因此如果连接速度较慢,电影可能会在几秒钟后停止。可以阅读[Basic Tutorial 12: Streaming]()来解决这个问题。 ## Walkthrough 这个例程仅在[Playback tutorial 1: Playbin usage]()基础上做了部分修改,所以让我们关注这些修改。 ```c++ /* Set the subtitle URI to play and some font description */ g_object_set (data.playbin, "suburi", "https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer_gr.srt", NULL); g_object_set (data.playbin, "subtitle-font-desc", "Sans, 18", NULL); ``` 在设置好媒体文件URI之后,我们设置了`suburi`属性,它告诉`playbin`包含字幕流的文件位置。在这个例程中,媒体文件本身就包含了多个字幕流,因此通过`suburi`属性设置的字幕流将会被加入字幕列表中并成为当前选中的字幕。 注意字幕流的metadata(例如字幕语言)在容器文件中,因此外挂字幕将不带有metadata。当运行例程时你会发现第一个字幕流没有语言标签。 `subtitle-font-desc`属性允许指定渲染字幕的字体。这里使用`Pango`库来渲染字体,你可以查阅它的文档以了解如何指定字体,尤其是[pango-font-description-from-string](http://developer.gnome.org/pango/stable/pango-Fonts.html#pango-font-description-from-string)函数。 简而言之,`subtitle-font-desc`属性值的格式是`[FAMILY-LIST] [STYLE-OPTIONS] [SIZE]`。`FAMILY-LIST`为字体,以`,`与后面的值隔开;`STYLE-OPTIONS`是字体属性列表,包含字体风格,变体,粗细和字间距,属性值以空格符隔开;`SIZE`是字号,是一个十进制数。 以下是几个可用的例子: - sans bold 12 - serif, monospace bold italic condensed 16 - normal 10 常用的字体有:Normal,Sans,Monospace。 常用的格式有:Normal,Oblique(罗马斜体),Italic(意大利斜体)。 常用的粗细有:Ultra-Light,Light,Normal,Bold,Ultra-Bold,Heavy。 常用的变体有:Normal,Small_Caps (一种将小写字母以稍小的大写题目替换的格式)。 常用的字间距有:Ultra-Condensed,Extra-Condensed,Condensed,Semi-Condensed,Normal,Semi-Expanded,Expanded,Extra-Expanded,Ultra-Expanded。 ```c++ /* Set flags to show Audio, Video and Subtitles */ g_object_get (data.playbin, "flags", &flags, NULL); flags |= GST_PLAY_FLAG_VIDEO | GST_PLAY_FLAG_AUDIO | GST_PLAY_FLAG_TEXT; g_object_set (data.playbin, "flags", flags, NULL); ``` 我们设置`flags`以允许播放音频,视频和字幕。 例程剩下的内容和[Playback tutorial 1: Playbin usage]()一样,除了将键盘输入修改的属性从`current-audio`改为了`current-text`。和之前一样,切记流的改变不是立即生效的,因为在你切换的流显示之前仍然有大量的信息在pipeline中流动直到中止。 ## Conclusion 这篇教程展示了`playbin`如何处理字幕,无论是内嵌字幕还是外挂字幕: - `playbin`通过`n-text`和`current-text`属性来选择字幕。 - 外挂字幕可以通过`suburi`属性来加载指定。 - 字幕的外观可以通过`subtitle-font-desc`属性来自定义。 下一篇教程将展示如何改变播放的速度。 ================================================ FILE: deepstream/DeepStreamSample.md ================================================ # DeepStream学习拾遗 ## nvstreammux 
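在展开说明之前,先看一段把上游元素连接到`nvstreammux`的典型代码(示意片段,变量名与pad名`sink_0`均为假设;其中的request pad机制见下文):

```c++
/* 示意代码:把source bin的src pad接到nvstreammux动态请求出的sink pad上 */
GstPad *sinkpad = gst_element_get_request_pad (streammux, "sink_0");
GstPad *srcpad = gst_element_get_static_pad (source_bin, "src");

if (!sinkpad || !srcpad || gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK)
    g_printerr ("Failed to link source bin to nvstreammux\n");

if (srcpad) gst_object_unref (srcpad);
if (sinkpad) gst_object_unref (sinkpad);
```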
![Gst-nvstreammux](images/DeepStreamSample/DS_plugin_gst-nvstreammux.png)

`nvstreammux`插件能够接收多个输入源的数据,将它们组成一个batch buffer,并附加上一个NvDsBatchMeta数据结构后输出。

由于在构建pipeline时无法预知source的信息,`nvstreammux`的sink pad不是静态存在的,而是按request机制动态生成的(内部经由`gst_element_request_pad()`创建);因此在link插件进行negotiation时,必须使用`gst_element_get_request_pad()`来获取新生成的pad,再进行连接。

## memType for dGPU

### NvBufSurfaceMemType

根据DeepStream SDK API Reference中的[NvBufSurfaceMemType](https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/DeepStream_Development_Guide/baggage/group__ee__nvbufsurface.html#ga2832a9d266a0002a0d1bd8c0df37625b)可以知道,DeepStream中总共定义了7种memory类型,分别对应dGPU和Jetson两个平台的4类内存:system memory,host memory,device memory,unified memory(default memory为对应平台的device memory)。

其中system memory就是普通的CPU内存,在语言层面由malloc/free管理。

剩下三种都是Nvidia针对CPU和GPU相互进行数据传递设计的内存管理机制:

1)Pinned memory(host memory)

CUDA提供了一系列Memory Management API。在早先的开发中,为了完成CPU和GPU之间的数据传递,我们需要进行大量的显式内存分配和拷贝工作。由于对device(GPU)内存的malloc/free运行开销非常大,因此更常用的做法是通过`cudaMemcpy()`接口完成两者之间数据的交互,尽可能复用已申请的device内存。

同时由于虚拟内存机制的存在,host memory在malloc之后直到被访问才会触发page fault来申请真正的物理内存,这是OS决定的。因为CPU和GPU通过PCI-E总线进行连接通信,GPU无法控制pageable memory的分配和换页的时机,所以在向device memory传输数据的时候,需要先进行一次数据中转——CUDA驱动程序首先分配一个临时的locked-page,将主机数据复制到locked-page,然后再将数据从locked-page传输到device memory,如下图所示:

![pinned-1024x541](images/DeepStreamSample/pinned-1024x541.jpg)

为了节省这次额外的中转拷贝开销,Nvidia设计了Pinned memory。pinned的含义是page-locked或non-pageable:这块内存是单独划分出来、能够被device直接访问的特殊内存,因此相对于pageable memory具有更高的传输带宽;但由于它占用的物理内存不可换页,过度使用会影响系统的虚拟内存性能。Pinned memory比pageable memory的分配操作更加昂贵,但是它对大数据量的传输有很好的表现。

2)Device memory

Device memory简单理解就是GPU内存,可以通过`cudaMalloc()`接口来申请。

频繁地进行memcpy对于追求性能的开发者来说是不可忍受的,因此CUDA还提供了一种zero-copy机制,用于减少内存拷贝的次数,避免device和host之间显式的数据传输。其本质就是将pinned memory映射到device的地址空间:

```c++
__host__ cudaError_t cudaHostAlloc ( void** pHost, size_t size, unsigned int flags )
```

`flags`为以下几个选项:

- `cudaHostAllocDefault`:`cudaHostAlloc()`和`cudaMallocHost()`等价,申请的是pinned memory。
- `cudaHostAllocPortable`:分配的pinned memory对所有CUDA context都有效,而不是单单执行分配此操作的那个context或者说线程。
- `cudaHostAllocWriteCombined`:在特殊系统配置情况下使用的,这块pinned memory在PCIE上的传输更快,但是对于host自己来说,却没什么效率。所以该选项一般用来让host去写,然后device读。
- `cudaHostAllocMapped`:返回一个标准的zero-copy memory。可以用`cudaHostGetDevicePointer()`来获取device端的地址,从而直接操作device memory。

> 使用zero-copy memory来作为device memory的读写很频繁的那部分的补充是很不明智的,pinned这一类适合大数据传输,不适合频繁的操作,究其根本原因还是GPU和CPU之间低得可怜的传输速度;甚至在频繁读写情况下,zero-copy表现比global memory也要差不少。

当使用zero-copy来共享host和device数据时,我们必须同步Memory的访问,否则,device和host同时访问该Memory会导致未定义行为。

3)Unified memory

Unified memory是CUDA 6.0引入的新特性。如前文所说,在CUDA 6.0之前,程序员必须在CPU和GPU两端都进行内存分配,并不断地进行手动copy,来保证两端的内存一致。

Unified memory在程序员的视角中,维护了一个统一的内存池,在CPU与GPU中共享。使用单一指针进行托管内存,由系统来自动地进行内存迁移。

Unified memory简化了代码编写和内存模型,可以在CPU端和GPU端共用一个指针,不用单独各自分配空间,方便管理,减少了代码量,这种代码量的减少在类对象的拷贝上体现得尤为明显。

需要说明的一点是,unified memory依赖于unified virtual addressing(UVA),并且在实现上与zero-copy相似,所有的copy工作都推迟到runtime阶段处理,对程序员透明。关于UVA我并没有进行过多的了解,简单来说就是CPU和GPU使用同一块连续的虚拟地址空间。
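为了更直观地对比这几种内存,下面给出一个极简的CUDA Runtime示意片段(省略错误检查,仅辅助理解,并非DeepStream源码):

```c++
#include <cuda_runtime.h>

size_t size = 1024 * sizeof (float);
float *h_pinned, *d_mem, *u_mem;

cudaMallocHost ((void **) &h_pinned, size);  // pinned (page-locked) host memory
cudaMalloc ((void **) &d_mem, size);         // device memory,CPU无法直接访问
cudaMallocManaged ((void **) &u_mem, size);  // unified memory,CPU/GPU共用同一指针

cudaMemcpy (d_mem, h_pinned, size, cudaMemcpyHostToDevice);  // device memory需要显式拷贝
u_mem[0] = 1.0f;  // unified memory可在host侧直接读写,迁移由runtime自动完成

cudaFreeHost (h_pinned);
cudaFree (d_mem);
cudaFree (u_mem);
```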
### mapping of memory type(0) not supported.

知道了这几点之后,再来看我前几天在映射上遇到的问题。我在Nvidia的developer forum上提了一个[issue](https://forums.developer.nvidia.com/t/nvbufsurface-mapping-of-memory-type-0-not-supported-on-tesla-4-dgpu/193977),moderator给我提供了一个思路,即去查看各插件内存类型属性的设置,我最终也根据这点发现了问题所在。但假如我们提前了解了Nvidia平台相关的内存管理模型,那么这个问题的解决思路将显而易见:

根据log可以知道,我取出的NvBufSurface的memType是0,即NVBUF_MEM_DEFAULT,对于dGPU来说就是device memory;而根据`NvBufSurfaceMap()`的说明可以知道,它在dGPU上仅支持NVBUF_MEM_CUDA_UNIFIED。因此可以排查各个DeepStream插件与memory type相关的属性设置,最后发现`nvvideoconvert`的`nvbuf-memory-type`属性默认值是0(NVBUF_MEM_DEFAULT)而不是3(NVBUF_MEM_CUDA_UNIFIED)。在我修改pipeline中该插件的这一属性之后,成功map并完成了对frame data的读写操作。

## osd的几种思路

### nvdsosd

作为一名基于GStreamer框架进行AI算法应用开发的工程师,最常遇到的问题之一就是如何将AI算法的输出可视化。DeepStream提供了一个用于OSD的插件`nvdsosd`,具体使用可以参考我写的使用文档:[nvdsosd](https://ricardolu.gitbook.io/gstreamer/deepstream/nvdsosd)。

### NV12

使用`nvdsosd`进行OSD的限制在于它的输入是RGBA格式,而编解码器通常只能处理YUV(NV12)格式的数据,因此在使用时需要`nvvideoconvert`进行格式转换,这其实会带来一部分性能损耗。因此还可以直接在解码出的NV12图像上OSD,只不过目前并没有什么图形库提供全面的在NV12上进行OSD的API,我所实现的[draw-rectangle-on-YUV](https://github.com/gesanqiu/draw-rectangle-on-YUV)库只支持绘制line和rectangle。

想要在解码出的NV12图像上进行OSD,需要解决以下几个问题:

- 如何map:gst_buffer_map -> NvBufSurface;
- GstBuffer的stride:NvBufSurface->surfaceList[frame_meta->batch_id].pitch;
- 如何从NvBufSurface结构中提取数据(how to extract data from a NvBufSurface structure)。

DeepStream提供了现成的API用来访问其封装的NvBufSurface数据:

```c++
{
    NvBufSurface *surface = NULL;
    NvDsMetaList *l_frame = NULL;
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buffer);
    // VideoPipeline* vp = (VideoPipeline*) user_data;

    GstMapInfo info;
    if (!gst_buffer_map(buffer, &info, (GstMapFlags) (GST_MAP_READ | GST_MAP_WRITE))) {
        TS_WARN_MSG_V ("WHY? WHAT PROBLEM ABOUT SYNC?");
        gst_buffer_unmap(buffer, &info);
        return;
    }

    surface = (NvBufSurface *) info.data;
    TS_INFO_MSG_V ("surface type: %d", surface->memType);

    uint32_t frame_width, frame_height, frame_pitch;
    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
        frame_width = surface->surfaceList[frame_meta->batch_id].width;
        frame_height = surface->surfaceList[frame_meta->batch_id].height;
        frame_pitch = surface->surfaceList[frame_meta->batch_id].pitch;

        if (NvBufSurfaceMap (surface, 0, 0, NVBUF_MAP_READ_WRITE)) {
            TS_ERR_MSG_V ("NVMM map failed.");
            return;
        }

        // 思路一:map成RGBA后用OpenCV绘制(需要上游先转换为RGBA)
        // cv::Mat tmpMat(frame_height, frame_width, CV_8UC4,
        //     surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
        //     frame_pitch);
        // std::vector<OsdObject> oos = jobject->GetOsdObject();  // OsdObject为示意类型名
        // for (size_t i = 0; i < oos.size(); i++) {
        //     if (oos[i].x_ >= 0 && oos[i].w_ > 0 && (oos[i].x_ + oos[i].w_) < frame_width &&
        //         oos[i].y_ >= 0 && oos[i].h_ > 0 && (oos[i].y_ + oos[i].h_) < frame_height) {
        //         // ...(绘制代码略)
        //     }
        // }

        // 思路二:直接在NV12数据上用draw-rectangle-on-YUV绘制
        m_YUVImgInfo.imgBuf = reinterpret_cast<uint8_t *> (surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0]);
        m_YUVImgInfo.width = frame_pitch;
        m_YUVImgInfo.height = frame_height;
        m_YUVImgInfo.yuvType = TYPE_YUV420SP_NV12;

        std::vector<OsdObject> oos = jobject->GetOsdObject();
        for (size_t i = 0; i < oos.size(); i++) {
            if (oos[i].x_ >= 0 && oos[i].w_ > 0 && (oos[i].x_ + oos[i].w_) < frame_width &&
                oos[i].y_ >= 0 && oos[i].h_ > 0 && (oos[i].y_ + oos[i].h_) < frame_height) {
                // ...(绘制及unmap代码略)
            }
        }
    }
}
```

### NvDsDisplayMeta

```c++
/* filesrc -> h264parser -> nvv4l2decoder -> nvstreammux ->
 * nvinfer -> nvvideoconvert -> nvdsosd -> nveglglessink */
```

以这样一条pipeline为例,nvinfer将完成推理任务,在nvvideoconvert阶段即可获得所有需要OSD的object metadata信息,这时候将你所想要的额外font params添加到NvDsDisplayMeta中即可。

```c++
// Build pipeline时为nvvideoconvert添加GstPadProbe
{
    nvvidconv_sink_pad = gst_element_get_static_pad (nvvidconv, "sink");
    if (!nvvidconv_sink_pad)
        g_print ("Unable to get sink pad\n");
    else
        gst_pad_add_probe (nvvidconv_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
            nvvidconv_sink_pad_buffer_probe, NULL, NULL);
}

//
static GstPadProbeReturn
nvvidconv_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data) { GstBuffer *buf = (GstBuffer *) info->data; NvDsObjectMeta *obj_meta = NULL; guint vehicle_count = 0; guint person_count = 0; guint face_count = 0; guint lp_count = 0; NvDsMetaList * l_frame = NULL; NvDsMetaList * l_obj = NULL; NvDsDisplayMeta *display_meta = NULL; // 遍历GstBuffer取出NvDsBatchMeta NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf); // 遍历batch中的所有帧 // 假如只有一路流(nvstreammux的batch-size=1),那么frame_meta_list长度为1 for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) { NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data); int offset = 0; // 遍历object_list,对于常规第三方算法手段可以省略,直接设置NvDsDisplayMeta即可 for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) { obj_meta = (NvDsObjectMeta *) (l_obj->data); /* Check that the object has been detected by the primary detector * and that the class id is that of vehicles/persons. */ if (obj_meta->unique_component_id == PRIMARY_DETECTOR_UID) { if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) vehicle_count++; if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) person_count++; } if (obj_meta->unique_component_id == SECONDARY_DETECTOR_UID) { if (obj_meta->class_id == SGIE_CLASS_ID_FACE) { face_count++; /* Print this info only when operating in secondary model. */ if (SECOND_DETECTOR_IS_SECONDARY) g_print ("Face found for parent object %p (type=%s)\n", obj_meta->parent, pgie_classes_str[obj_meta->parent->class_id]); } if (obj_meta->class_id == SGIE_CLASS_ID_LP) { lp_count++; /* Print this info only when operating in secondary model. */ if (SECOND_DETECTOR_IS_SECONDARY) g_print ("License plate found for parent object %p (type=%s)\n", obj_meta->parent, pgie_classes_str[obj_meta->parent->class_id]); } } } // 添加自定义的OSD信息,具体有哪些设置参数可以参考官方API文档 display_meta = nvds_acquire_display_meta_from_pool(batch_meta); NvOSD_TextParams *txt_params = &display_meta->text_params[0]; display_meta->num_labels = 1; txt_params->display_text = g_malloc0 (MAX_DISPLAY_LEN); offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ", person_count); offset += snprintf(txt_params->display_text + offset , MAX_DISPLAY_LEN, "Vehicle = %d ", vehicle_count); offset += snprintf(txt_params->display_text + offset , MAX_DISPLAY_LEN, "Face = %d ", face_count); offset += snprintf(txt_params->display_text + offset , MAX_DISPLAY_LEN, "License Plate = %d ", lp_count); /* Now set the offsets where the string should appear */ txt_params->x_offset = 10; txt_params->y_offset = 12; /* Font , font-color and font-size */ txt_params->font_params.font_name = "Serif"; txt_params->font_params.font_size = 10; txt_params->font_params.font_color.red = 1.0; txt_params->font_params.font_color.green = 1.0; txt_params->font_params.font_color.blue = 1.0; txt_params->font_params.font_color.alpha = 1.0; /* Text background color */ txt_params->set_bg_clr = 1; txt_params->text_bg_clr.red = 0.0; txt_params->text_bg_clr.green = 0.0; txt_params->text_bg_clr.blue = 0.0; txt_params->text_bg_clr.alpha = 1.0; nvds_add_display_meta_to_frame(frame_meta, display_meta); } g_print ("Frame Number = %d Vehicle Count = %d Person Count = %d" " Face Count = %d License Plate Count = %d\n", frame_number, vehicle_count, person_count, face_count, lp_count); frame_number++; return GST_PAD_PROBE_OK; } ``` ## FAQ 上述示例Pipeline在显示上使用了nveglglessink插件,这取决于开发平台是否支持显示,例如在Tesla这类计算卡平台上使用docker 
container环境开发时默认无法显示,具体可以参考[这个issue](https://forums.developer.nvidia.com/t/cugraphicsglregisterbuffer-failed-with-error-219-gst-eglglessink-cuda-init-texture-1/121833):据CE的回复,需要安装Nvidia的Display Driver之后配置Virtual Display;由于我所用的T4服务器是公司资产,无法确定这么做的风险,所以没有尝试。而Jetson平台几乎都支持GPU Display,因此没有这种问题。

================================================
FILE: postscript.md
================================================

# 后记

从想法诞生到实践,再到写下这篇后记,不过短短半个月。虽然在翻译Basic tutorial的过程中规划了越来越多的翻译内容,但其中很多Core Library API Reference已经超出了我现在的理解范围,大多数时候我都是有针对性地去查阅资料并进行调试。

这算是我第二个完成度比较高的教程,上一个是[CMake](https://ricardolu.gitbook.io/trantor/cmake-in-action)(虽然还差实例,repo也参考了开源),目前完成度大概在60%,争取在国庆之前完成到80%。我也想按计划完善这篇教程,但是能力有限,我已经尽可能快地将所有开发教程实例化并且调试上传。奈何事物变化太快,我在犹豫了两个月之后还是决定离职,这意味着我将离开使用了一年多的高通开发平台;准确来说,为了个人发展,我将离开音视频应用开发领域,甚至在可见的未来,我的生活中将没有嵌入式这三个字。

求学数载终于有能够拿得出手的东西,虽然在大佬面前微不足道,但也是我这么多年以来唯一的成品。过去的一年三个月说长不长说短不短,这期间我总是带着一丝自傲抱怨、求全苛责他人而不知自谦,这不是好习惯,但我觉得这在另一方面也不断地促使我成长,追求更好的环境,更好的自己。大丈夫能屈能伸,talk is too much,show me your codes。

2021.9.8 22:43 Ricardo Lu.

时隔近两月,近期为了公司展览将一些工作迁移到Nvidia平台,所以再次捡起了DeepStream,于此更新nvdsosd。不得不说Nvidia平台的工作完成度非常高,对比之下高通仿佛啥也做不好,有入手一个Jetson NX的想法了。

2021.10.31 13:29 Ricardo Lu.

这两个月在忙着填基础知识的坑,最近回到GStreamer上看了一些官方文档,目前打算这个月完结这篇教程,最后的一部分将基于draw-rectangle-nv12这个库实现一个osd plugin,然后过年期间将把教程和例程做一下整理。本来打算再花时间研究一下playback component,但之后要转到Go语言阵营了。过去一年发生了太多事,我的心态发生了很大的变化,我想我的嵌入式生涯也确实该结束了。

2022.1.2 17:16 Ricardo Lu.

最终还是没忍住把Playback的教程跳着看完了,随缘翻译了五篇。这一次看教程的速度相较于去年八月快了很多,其实只是看的话一天就看完了;翻译起来由于懒癌发作,实际使用时间大概是阅读的两倍。接下来由于工作上的安排,osd-plugin可能会delay了,因为它远比我预想的要庞大,但我确保不会割掉这个计划,毕竟我其实已经有几个月没有写代码了,用这个项目作为复建也还不错。

2022.1.7 16:48 Ricardo Lu.

================================================
FILE: qti_gst_plugins/qtioverlay/README.md
================================================

# qtioverlay

## Overview

高通的[qtioverlay](https://developer.qualcomm.com/qualcomm-robotics-rb5-kit/software-reference-manual/application-semantics/gstreamer-plugins/qtioverlay)插件可以在YUV(NV12/NV21)图像帧上绘制RGB和YUV内容,支持在YUV帧顶层绘制静态纹理图、回归边框、用户自定义文本和日期/时间。qtioverlay具备以下特性:

- YUV帧可以来自于摄像头视频流或视频文件的解码;
- 绘制信息可以是机器学习算法输出的Metadata,或直接使用GST Properties来设置的用户自定义文本;
- Machine Learning Metadata支持检测、分割、分类和姿态估计(PoseNet)等算法输出;
- 一个metadata关联一个overlay item,支持同时绘制多个不同类型的overlay item,metadata的个数没有上限,但数量过多会影响性能;
- 检测算法的metadata数据结构被设计为bbox和label信息;
- 分类算法的metadata数据结构则仅有label和confidence信息(分类结果通常作为检测结果的一部分,即分类的数据结构为检测的数据结构结构体的一个成员);
- 支持overlay图片的格式为RGBA;
- 日期/时间和用户自定义文本内容只能通过设置GST Properties的方式来实现,也即这部分信息将和GST Pipeline绑定,一旦pipeline初始化完毕就无法改变。

**qtioverlay插件源码地址:**[**qtioverlay**](https://github.com/gesanqiu/gstreamer-example/tree/main/qti_gst_plugins/qtioverlay)

**例程地址:**[**qtioverlay-example**](https://github.com/gesanqiu/gstreamer-example/tree/main/qti_gst_plugins/qtioverlay/qtioverlay-example)

## Properties

- `name`:qtioverlay实例的name。
- `qos`:是否处理服务质量(Quality of Service)事件。
- `overlay-text`:用户自定义文本字符串。
- `overlay-data`:overlay日期字符串。
- `bbox-color`:overlay ML Metadata回归边框的颜色,RGBA(32位无符号数),默认为0x0000CCFF。
- `date-color`:overlay日期颜色,RGBA(32位无符号数),默认为0xFF0000FF。
- `text-color`:用户自定义文本颜色,RGBA(32位无符号数),默认为0xFFFF00FF。
- `pose-color`:overlay ML Metadata PoseNet Type的颜色,RGBA(32位无符号数),默认为0x33CC00FF。

以上都是`qtioverlay`的GST Properties,一旦设置就与pipeline绑定,无法动态修改,并不适合开发使用,因此在开发中更多使用的是ML Metadata来完成绘制信息的传递。
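下面用一小段示意代码演示如何在应用中设置这些属性(element与属性值均为示例):

```c
// 示意代码:创建qtioverlay并一次性设置其GST Properties
GstElement *overlay = gst_element_factory_make ("qtioverlay", "overlay");
g_object_set (G_OBJECT (overlay),
    "overlay-text", "hello qtioverlay",
    "bbox-color", 0x00FF00FF,   // RGBA 32位无符号数:绿色边框
    NULL);
```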
## Develop Guide

`qtioverlay`所依赖的库主要有两个:`qtimlmeta`和`qmmf_overlay`。`libqtimlmeta.so`为ML Metadata的实现,`libqmmf_overlay.so`为绘制的实现。

### qtimlmeta

- GstMLDetectionMeta

```c
/**
 * GstMLBoundingBox:
 * @x: horizontal start position
 * @y: vertical start position
 * @width: active window width
 * @height: active window height
 *
 * Bounding box properties
 */
struct _GstMLBoundingBox {
  guint x;
  guint y;
  guint width;
  guint height;
};

/**
 * GstMLClassificationResult:
 * @name: name for given object
 * @confidence: confidence for given object
 *
 * Name and confidence handle
 */
struct _GstMLClassificationResult {
  gchar *name;
  gfloat confidence;
};

/**
 * GstMLDetectionMeta:
 * @parent: parent #GstMeta
 * @box: bounding box coordinates
 * @box_info: list of GstMLClassificationResult which handle names and confidences
 *
 * Machine learning SSD models properties
 */
struct _GstMLDetectionMeta {
  GstMeta parent;
  GstMLBoundingBox bounding_box;
  GSList *box_info;
};
```

- GstMLClassificationMeta

```c
/**
 * GstMLClassificationMeta:
 * @parent: parent #GstMeta
 * @result: name and confidence
 * @location: location in frame; if location is CUSTOM then x/y are considered
 * @x: horizontal start position if location is CUSTOM
 * @y: vertical start position if location is CUSTOM
 *
 * Machine learning classification models properties
 */
struct _GstMLClassificationMeta {
  GstMeta parent;
  GstMLClassificationResult result;
};
```

- gst_buffer_add_detection_meta()

```c
/**
 * gst_buffer_add_detection_meta:
 * @buffer: the buffer new metadata belongs to
 *
 * Creates new bounding detection entry and returns pointer to new
 * entry. Metadata payload is not input parameter in order to avoid
 * unnecessary copy of data.
 *
 */
GST_EXPORT
GstMLDetectionMeta * gst_buffer_add_detection_meta (GstBuffer * buffer);

GstMLDetectionMeta *
gst_buffer_add_detection_meta (GstBuffer * buffer)
{
  g_return_val_if_fail (buffer != NULL, NULL);

  GstMLDetectionMeta *meta = (GstMLDetectionMeta *) gst_buffer_add_meta (buffer,
      GST_ML_DETECTION_INFO, NULL);

  return meta;
}
```

- 资源回收

```c
static void
gst_ml_detection_free (GstMeta *meta, GstBuffer *buffer)
{
  GstMLDetectionMeta *bb_meta = (GstMLDetectionMeta *) meta;

  g_slist_free_full(bb_meta->box_info, free);
  GST_DEBUG ("free detection meta ts: %llu ", buffer->pts);
}
```

GstMeta将随着GstBuffer的释放而自动释放,因此这部分资源的释放不需要用户手动操作。

### ML Metadata

```c
{
    GstMLDetectionMeta *meta = gst_buffer_add_detection_meta(buffer);
    if (!meta) {
        TS_ERR_MSG_V ("Failed to create metadata");
        return ;
    }

    GstMLClassificationResult *box_info = (GstMLClassificationResult*)malloc(
        sizeof(GstMLClassificationResult));

    uint32_t label_size = g_labels[results->at(i).label].size() + 1;
    box_info->name = (char *)malloc(label_size);
    snprintf(box_info->name, label_size, "%s", g_labels[results->at(i).label].c_str());

    box_info->confidence = results->at(i).confidence;
    meta->box_info = g_slist_append (meta->box_info, box_info);

    meta->bounding_box.x = results->at(i).rect[0];
    meta->bounding_box.y = results->at(i).rect[1];
    meta->bounding_box.width = results->at(i).rect[2];
    meta->bounding_box.height = results->at(i).rect[3];
}
```
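与添加相对应,下游element或pad probe中可以用`ml_meta.c`提供的`gst_buffer_get_detection_meta()`取回挂在buffer上的所有检测metadata,下面是一个示意片段:

```c
// 示意代码:遍历GstBuffer上挂载的所有GstMLDetectionMeta
GSList *meta_list = gst_buffer_get_detection_meta (buffer);
for (GSList *l = meta_list; l != NULL; l = l->next) {
    GstMLDetectionMeta *meta = (GstMLDetectionMeta *) l->data;
    GstMLClassificationResult *info =
        (GstMLClassificationResult *) g_slist_nth_data (meta->box_info, 0);

    g_print ("%s: x=%u y=%u w=%u h=%u\n", info ? info->name : "unknown",
        meta->bounding_box.x, meta->bounding_box.y,
        meta->bounding_box.width, meta->bounding_box.height);
}
g_slist_free (meta_list);  // 只释放链表节点,meta本身仍由GstBuffer管理
```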
### qtioverlay

```c
/**
 * gst_overlay_apply_ml_bbox_item:
 * @gst_overlay: context
 * @metadata: machine learning metadata entry
 * @item_id: pointer to overlay item instance id
 *
 * Converts GstMLDetectionMeta metadata to overlay configuration and applies it
 * as bounding box overlay.
 *
 * Return true if succeed.
 */
static gboolean
gst_overlay_apply_ml_bbox_item (GstOverlay * gst_overlay, gpointer metadata,
    uint32_t * item_id)
{
  OverlayParam ov_param;
  int32_t ret = 0;

  g_return_val_if_fail (gst_overlay != NULL, FALSE);
  g_return_val_if_fail (metadata != NULL, FALSE);
  g_return_val_if_fail (item_id != NULL, FALSE);

  GstMLDetectionMeta * meta = (GstMLDetectionMeta *) metadata;
  GstMLClassificationResult * result =
      (GstMLClassificationResult *) g_slist_nth_data (meta->box_info, 0);

  GstVideoRectangle bbox;
  bbox.x = meta->bounding_box.x;
  bbox.y = meta->bounding_box.y;
  bbox.w = meta->bounding_box.width;
  bbox.h = meta->bounding_box.height;

  if (gst_overlay->meta_color)
    gst_overlay->bbox_color = meta->bbox_color;

  return gst_overlay_apply_bbox_item (gst_overlay, &bbox, result->name,
      gst_overlay->bbox_color, item_id);
}
```

在`gst_overlay_apply_bbox_item`中,最终使用`gst_overlay->overlay`的成员函数`CreateOverlayItem`和`EnableOverlayItem`进行绘图,这部分的实现在`qmmf_overlay`中。

### 增加meta-color属性

```c
meta-color : Bounding box overlay use meta data color
             flags: readable, writable
             Boolean. Default: false
```

在`_GstMLDetectionMeta`结构体中增加一个`guint`类型的32位无符号整型变量`bbox_color`,用于表示`qtioverlay`绘制颜色所需的RGBA值;在`gstoverlay.cc`(也即`qtioverlay`的源码)中增加一个`gboolean`类型的`meta-color`变量,用于判断是使用`bbox-color`属性设置的固定边框颜色,还是从ML Metadata中取颜色值动态改变边框颜色。

```c
// modify of gstoverlay.cc
static void
gst_overlay_class_init (GstOverlayClass * klass)
{
  g_object_class_install_property (gobject, PROP_OVERLAY_META_COLOR,
      g_param_spec_boolean ("meta-color", "Meta color",
          "Bounding box overlay use meta data color",
          DEFAULT_PROP_OVERLAY_META_COLOR,
          static_cast<GParamFlags>(
              G_PARAM_CONSTRUCT | G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS)));
}

// 在gst_overlay_init()中增加初始化
// 在gst_overlay_get_property()和gst_overlay_set_property()中增加相应的属性读写方法
g_value_set_boolean (value, gst_overlay->meta_color);
gst_overlay->meta_color = g_value_get_boolean (value);

// 在绘制函数gst_overlay_apply_ml_bbox_item()中增加bbox_color的赋值语句
if (gst_overlay->meta_color)
  gst_overlay->bbox_color = meta->bbox_color;

// 在应用生成ML Meta的时候将BBox RGB各通道值转换为一个表示RGBA四通道的32位无符号数
meta->bbox_color = (r << 24) + (g << 16) + (b << 8) + 0xFF;
```

### 增加meta-thick属性

在gstoverlay中,`gst_overlay_apply_ml_bbox_item`调用`gst_overlay_apply_bbox_item`完成bbox item的绘制:

```c
// gst_overlay_apply_ml_bbox_item
gst_overlay_apply_bbox_item (gst_overlay, &bbox, result->name,
    gst_overlay->bbox_color, item_id);

// gst_overlay_apply_bbox_item()
gst_overlay->overlay->CreateOverlayItem (ov_param, item_id);
gst_overlay->overlay->EnableOverlayItem (*item_id);

// gstoverlay.cc的实现依赖于libqmmf_overlay.so
// ==============>qmmf_overlay.cc<===============//
// EnableOverlayItem()
overlayItem->Activate(true);

/* ======================================================================> */
// 在gstoverlay.cc中初始化qtioverlay这个插件时将gst_overlay_transform_frame_ip
// 注册为VideoFilter的transform_frame_ip回调用于完成GstBuffer中frame的转换
filter->transform_frame_ip = GST_DEBUG_FUNCPTR (gst_overlay_transform_frame_ip);
// gst_overlay_transform_frame_ip()调用
res = gst_overlay_apply_overlay (gst_overlay, frame);
/* <====================================================================== */

// ==============>qmmf_overlay.cc<===============//
// 绘制则是在ApplyOverlay()中的
// 首先判断OverlayItem->IsActive()
// 绘制
c2dDraw(target_c2dsurface_id_, 0, 0, 0, 0, c2d_objects.objects,
    numActiveOverlays);
```

可以看到,整个流程中并没有与绘制线条粗细(thick)相关的参数,而最终的绘制工作交给了C2D来完成:

```c++
// copybit_c2d.cpp
ctx->libc2d2 = ::dlopen("libC2D2.so", RTLD_NOW);
*(void **)&LINK_c2dDraw = ::dlsym(ctx->libc2d2, "c2dDraw");
```
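作为参考,这种显式加载的通用写法大致如下(示意代码;`c2dDraw`的真实签名并未公开,这里仅用`void (*)(void)`占位):

```c++
#include <dlfcn.h>

// 示意代码:显式加载动态库并解析符号,与上面copybit_c2d.cpp的做法一致
void *handle = dlopen ("libC2D2.so", RTLD_NOW);
if (handle != NULL) {
    void (*c2d_draw) (void) = (void (*)(void)) dlsym (handle, "c2dDraw");
    if (c2d_draw != NULL) {
        // ...通过函数指针调用...
    }
    dlclose (handle);
}
```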
这部分使用了动态库的显式调用。高通并没有开源C2D的源码,我在整个overlay相关的源码中都没有找到thick相关的参数,我原以为会有一个相关的宏定义,但是并没有,因此这部分调查告一段落。

**注:**在实际使用过程中,如果绘制的是一直变化的bbox_info,绘制效果是能够接受的;但经过测试,如果一直画同一个bbox_info,透明度会比较差,整个颜色比较淡。

**注:**`libqmmf_overlay.so`的实现依赖于C2D(GPU加速),这部分我并未深入了解过,因此不做过多介绍。

**后记:**关于qtioverlay的介绍至此就结束了,插件或多或少有一些缺陷,开发也不可避免要妥协。好在尝试解决问题的过程始终是有趣的,但还是希望高通能够更好地维护和完善自家平台的工具,让开发者有更好的开发体验。

================================================
FILE: qti_gst_plugins/qtioverlay/qtimlmeta/CMakeLists.txt
================================================

cmake_minimum_required(VERSION 3.8.2)

project(GST_PLUGIN_QTI_OSS_OVERLAY
  VERSION ${GST_PLUGINS_QTI_OSS_VERSION}
  LANGUAGES C CXX
)

set(CMAKE_INCLUDE_CURRENT_DIR ON)

include_directories(${SYSROOT_INCDIR})
link_directories(${SYSROOT_LIBDIR})

find_package(PkgConfig)

# Get the pkgconfigs exported by the automake tools
pkg_check_modules(GST REQUIRED gstreamer-1.0>=${GST_VERSION_REQUIRED})
pkg_check_modules(GST_ALLOC REQUIRED gstreamer-allocators-1.0>=${GST_VERSION_REQUIRED})

# Common compiler flags.
set(CMAKE_CXX_FLAGS "${CMAKE_C_FLAGS} -Wall -Wextra -Werror")

# GStreamer machine learning metadata.
set(GST_QTI_ML_META qtimlmeta)

add_library(${GST_QTI_ML_META} SHARED
  ml_meta.c
)

target_include_directories(${GST_QTI_ML_META} PUBLIC
  ${GST_INCLUDE_DIRS}
)

target_include_directories(${GST_QTI_ML_META} PRIVATE
  ${KERNEL_BUILDDIR}/usr/include
)

target_link_libraries(${GST_QTI_ML_META} PRIVATE
  ${GST_LIBRARIES}
  ${GST_ALLOC_LIBRARIES}
  ${GST_VIDEO_LIBRARIES}
)

install(TARGETS ${GST_QTI_ML_META} DESTINATION lib OPTIONAL)

FILE(GLOB INCLUDE_FILES "ml_meta.h")
INSTALL(FILES ${INCLUDE_FILES} DESTINATION include/ml-meta)

================================================
FILE: qti_gst_plugins/qtioverlay/qtimlmeta/ml_meta.c
================================================

/*
 * Copyright (c) 2020, The Linux Foundation. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 *       copyright notice, this list of conditions and the following
 *       disclaimer in the documentation and/or other materials provided
 *       with the distribution.
 *     * Neither the name of The Linux Foundation nor the names of its
 *       contributors may be used to endorse or promote products derived
 *       from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
 * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
 * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/ #include "ml_meta.h" #ifndef GST_DISABLE_GST_DEBUG #define GST_CAT_DEFAULT ensure_debug_category() static GstDebugCategory * ensure_debug_category (void) { static gsize category = 0; if (g_once_init_enter (&category)) { gsize cat_done; cat_done = (gsize) _gst_debug_category_new("gstmlmeta", 0, "gstmlmeta"); g_once_init_leave (&category, cat_done); } return (GstDebugCategory *) category; } #else #define ensure_debug_category() /* NOOP */ #endif /* GST_DISABLE_GST_DEBUG */ static gboolean gst_ml_detection_init (GstMeta * meta, gpointer params, GstBuffer * buffer) { GstMLDetectionMeta *bb_meta = (GstMLDetectionMeta *) meta; bb_meta->box_info = NULL; memset(&bb_meta->bounding_box, 0, sizeof(bb_meta->bounding_box)); return TRUE; } static void gst_ml_detection_free (GstMeta *meta, GstBuffer *buffer) { GstMLDetectionMeta *bb_meta = (GstMLDetectionMeta *) meta; g_slist_free_full(bb_meta->box_info, free); GST_DEBUG ("free detection meta ts: %llu ", buffer->pts); } GType gst_ml_detection_get_type (void) { static volatile GType type = 0; static const gchar *tags[] = { NULL }; if (g_once_init_enter (&type)) { GType _type = gst_meta_api_type_register ("GstMLDetectionMetaAPI", tags); g_once_init_leave (&type, _type); } return type; } const GstMetaInfo * gst_ml_detection_get_info (void) { static const GstMetaInfo *ml_meta_info = NULL; if (g_once_init_enter ((GstMetaInfo **) & ml_meta_info)) { const GstMetaInfo *meta = gst_meta_register (GST_ML_DETECTION_API_TYPE, "GstMLDetectionMeta", (gsize) sizeof (GstMLDetectionMeta), (GstMetaInitFunction) gst_ml_detection_init, (GstMetaFreeFunction) gst_ml_detection_free, (GstMetaTransformFunction) NULL); g_once_init_leave ((GstMetaInfo **) & ml_meta_info, (GstMetaInfo *) meta); } return ml_meta_info; } static gboolean gst_ml_segmentation_init (GstMeta * meta, gpointer params, GstBuffer * buffer) { GstMLSegmentationMeta *img_meta = (GstMLSegmentationMeta *) meta; img_meta->img_buffer = NULL; img_meta->img_width = 0; img_meta->img_height = 0; img_meta->img_size = 0; img_meta->img_format = GST_VIDEO_FORMAT_UNKNOWN; img_meta->img_stride = 0; return TRUE; } static void gst_ml_segmentation_free (GstMeta *meta, GstBuffer *buffer) { GstMLSegmentationMeta *img_meta = (GstMLSegmentationMeta *) meta; if (img_meta->img_buffer) { free(img_meta->img_buffer); img_meta->img_buffer = NULL; } GST_DEBUG ("free segmentation meta ts: %llu ", buffer->pts); } GType gst_ml_segmentation_get_type (void) { static volatile GType type = 0; static const gchar *tags[] = { NULL }; if (g_once_init_enter (&type)) { GType _type = gst_meta_api_type_register ("GstMLSegmentationMetaAPI", tags); g_once_init_leave (&type, _type); } return type; } const GstMetaInfo * gst_ml_segmentation_get_info (void) { static const GstMetaInfo *ml_meta_info = NULL; if (g_once_init_enter ((GstMetaInfo **) & ml_meta_info)) { const GstMetaInfo *meta = gst_meta_register (GST_ML_SEGMENTATION_API_TYPE, "GstMLSegmentationMeta", (gsize) sizeof (GstMLSegmentationMeta), (GstMetaInitFunction) gst_ml_segmentation_init, (GstMetaFreeFunction) gst_ml_segmentation_free, (GstMetaTransformFunction) NULL); g_once_init_leave ((GstMetaInfo **) & ml_meta_info, (GstMetaInfo *) meta); } return ml_meta_info; } static gboolean gst_ml_classification_init (GstMeta * meta, gpointer params, GstBuffer * buffer) { GstMLClassificationMeta *l_meta = (GstMLClassificationMeta *) meta; l_meta->result.name = NULL; l_meta->result.confidence = 0.0; return TRUE; } static void gst_ml_classification_free (GstMeta *meta, GstBuffer *buffer) { 
GstMLClassificationMeta *l_meta = (GstMLClassificationMeta *) meta; if (l_meta->result.name) { free(l_meta->result.name); l_meta->result.name = NULL; } GST_DEBUG ("free classification meta ts: %llu ", buffer->pts); } GType gst_ml_classification_get_type (void) { static volatile GType type = 0; static const gchar *tags[] = { NULL }; if (g_once_init_enter (&type)) { GType _type = gst_meta_api_type_register ("GstMLClassificationMetaAPI", tags); g_once_init_leave (&type, _type); } return type; } const GstMetaInfo * gst_ml_classification_get_info (void) { static const GstMetaInfo *ml_meta_info = NULL; if (g_once_init_enter ((GstMetaInfo **) & ml_meta_info)) { const GstMetaInfo *meta = gst_meta_register (GST_ML_CLASSIFICATION_API_TYPE, "GstMLClassificationMeta", (gsize) sizeof (GstMLClassificationMeta), (GstMetaInitFunction) gst_ml_classification_init, (GstMetaFreeFunction) gst_ml_classification_free, (GstMetaTransformFunction) NULL); g_once_init_leave ((GstMetaInfo **) & ml_meta_info, (GstMetaInfo *) meta); } return ml_meta_info; } static gboolean gst_ml_posenet_init (GstMeta * meta, gpointer params, GstBuffer * buffer) { GstMLPoseNetMeta *p_meta = (GstMLPoseNetMeta *) meta; memset(p_meta->points, 0, sizeof(p_meta->points)); p_meta->score = 0.0; return TRUE; } GType gst_ml_posenet_get_type (void) { static volatile GType type = 0; static const gchar *tags[] = { NULL }; if (g_once_init_enter (&type)) { GType _type = gst_meta_api_type_register ("GstMLPoseNetMetaAPI", tags); g_once_init_leave (&type, _type); } return type; } const GstMetaInfo * gst_ml_posenet_get_info (void) { static const GstMetaInfo *ml_meta_info = NULL; if (g_once_init_enter ((GstMetaInfo **) & ml_meta_info)) { const GstMetaInfo *meta = gst_meta_register (GST_ML_POSENET_API_TYPE, "GstMLPoseNetMeta", (gsize) sizeof (GstMLPoseNetMeta), (GstMetaInitFunction) gst_ml_posenet_init, (GstMetaFreeFunction) NULL, (GstMetaTransformFunction) NULL); g_once_init_leave ((GstMetaInfo **) & ml_meta_info, (GstMetaInfo *) meta); } return ml_meta_info; } GstMLDetectionMeta * gst_buffer_add_detection_meta (GstBuffer * buffer) { g_return_val_if_fail (buffer != NULL, NULL); GstMLDetectionMeta *meta = (GstMLDetectionMeta *) gst_buffer_add_meta (buffer, GST_ML_DETECTION_INFO, NULL); return meta; } GSList * gst_buffer_get_detection_meta (GstBuffer * buffer) { gpointer state = NULL; GstMeta *meta = NULL; const GstMetaInfo *info = GST_ML_DETECTION_INFO; g_return_val_if_fail (buffer != NULL, NULL); GSList *meta_list = NULL; while ((meta = gst_buffer_iterate_meta (buffer, &state))) { if (meta->info->api == info->api) { meta_list = g_slist_prepend(meta_list, meta); } } return meta_list; } GstMLSegmentationMeta * gst_buffer_add_segmentation_meta (GstBuffer * buffer) { g_return_val_if_fail (buffer != NULL, NULL); GstMLSegmentationMeta *meta = (GstMLSegmentationMeta *) gst_buffer_add_meta (buffer, GST_ML_SEGMENTATION_INFO, NULL); return meta; } GSList * gst_buffer_get_segmentation_meta (GstBuffer * buffer) { gpointer state = NULL; GstMeta *meta = NULL; const GstMetaInfo *info = GST_ML_SEGMENTATION_INFO; g_return_val_if_fail (buffer != NULL, NULL); GSList *meta_list = NULL; while ((meta = gst_buffer_iterate_meta (buffer, &state))) { if (meta->info->api == info->api) { meta_list = g_slist_prepend(meta_list, meta); } } return meta_list; } GstMLClassificationMeta * gst_buffer_add_classification_meta (GstBuffer * buffer) { g_return_val_if_fail (buffer != NULL, NULL); GstMLClassificationMeta *meta = (GstMLClassificationMeta *) gst_buffer_add_meta (buffer, 
GST_ML_CLASSIFICATION_INFO, NULL); return meta; } GSList * gst_buffer_get_classification_meta (GstBuffer * buffer) { gpointer state = NULL; GstMeta *meta = NULL; const GstMetaInfo *info = GST_ML_CLASSIFICATION_INFO; g_return_val_if_fail (buffer != NULL, NULL); GSList *meta_list = NULL; while ((meta = gst_buffer_iterate_meta (buffer, &state))) { if (meta->info->api == info->api) { meta_list = g_slist_prepend(meta_list, meta); } } return meta_list; } GstMLPoseNetMeta * gst_buffer_add_posenet_meta (GstBuffer * buffer) { g_return_val_if_fail (buffer != NULL, NULL); GstMLPoseNetMeta *meta = (GstMLPoseNetMeta *) gst_buffer_add_meta (buffer, GST_ML_POSENET_INFO, NULL); return meta; } GSList * gst_buffer_get_posenet_meta (GstBuffer * buffer) { gpointer state = NULL; GstMeta *meta = NULL; const GstMetaInfo *info = GST_ML_POSENET_INFO; g_return_val_if_fail (buffer != NULL, NULL); GSList *meta_list = NULL; while ((meta = gst_buffer_iterate_meta (buffer, &state))) { if (meta->info->api == info->api) { meta_list = g_slist_prepend(meta_list, meta); } } return meta_list; } ================================================ FILE: qti_gst_plugins/qtioverlay/qtimlmeta/ml_meta.h ================================================ /* * Copyright (c) 2020, The Linux Foundation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of The Linux Foundation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef __GST_ML_META_H__ #define __GST_ML_META_H__ #include <gst/gst.h> #include <gst/video/video.h> G_BEGIN_DECLS typedef struct _GstMLClassificationResult GstMLClassificationResult; typedef struct _GstMLBoundingBox GstMLBoundingBox; typedef struct _GstMLDetectionMeta GstMLDetectionMeta; typedef struct _GstMLSegmentationMeta GstMLSegmentationMeta; typedef struct _GstMLClassificationMeta GstMLClassificationMeta; typedef struct _GstMLKeyPoint GstMLKeyPoint; typedef struct _GstMLPose GstMLPose; typedef struct _GstMLPoseNetMeta GstMLPoseNetMeta; #define GST_ML_DETECTION_API_TYPE (gst_ml_detection_get_type()) #define GST_ML_DETECTION_INFO (gst_ml_detection_get_info()) #define GST_ML_SEGMENTATION_API_TYPE (gst_ml_segmentation_get_type()) #define GST_ML_SEGMENTATION_INFO (gst_ml_segmentation_get_info()) #define GST_ML_CLASSIFICATION_API_TYPE (gst_ml_classification_get_type()) #define GST_ML_CLASSIFICATION_INFO (gst_ml_classification_get_info()) #define GST_ML_POSENET_API_TYPE (gst_ml_posenet_get_type()) #define GST_ML_POSENET_INFO (gst_ml_posenet_get_info()) /** * GstMLBoundingBox: * @x: horizontal start position * @y: vertical start position * @width: active window width * @height: active window height * Bounding box properties */ struct _GstMLBoundingBox { guint x; guint y; guint width; guint height; }; /** * GstMLClassificationResult: * @name: name for given object * @confidence: confidence for given object * * Name and confidence handle */ struct _GstMLClassificationResult { gchar *name; gfloat confidence; }; /** * GstMLDetectionMeta: * @parent: parent #GstMeta * @bounding_box: bounding box coordinates * @box_info: list of GstMLClassificationResult which handle names and confidences * @bbox_color: bounding box color * * Machine learning SSD models properties */ struct _GstMLDetectionMeta { GstMeta parent; GstMLBoundingBox bounding_box; GSList *box_info; guint bbox_color; }; /** * GstMLSegmentationMeta: * @parent: parent #GstMeta * @img_buffer: pointer to segmentation image data * @img_width: the segmentation image width in pixels * @img_height: the segmentation image height in pixels * @img_size: size of image buffer in bytes * @img_format: the segmentation image pixel format * @img_stride: the segmentation image bytes per line * * Machine learning segmentation image models properties */ struct _GstMLSegmentationMeta { GstMeta parent; gpointer img_buffer; guint img_width; guint img_height; guint img_size; GstVideoFormat img_format; guint img_stride; }; /** * GstMLClassificationMeta: * @parent: parent #GstMeta * @result: name and confidence * @location: location in frame; if location is CUSTOM then x/y are considered * @x: horizontal start position if location is CUSTOM * @y: vertical start position if location is CUSTOM * * Machine learning classification models properties */ struct _GstMLClassificationMeta { GstMeta parent; GstMLClassificationResult result; }; /** * GstMLKeyPoints - PoseNet key points */ enum GstMLKeyPointsType{ NOSE, LEFT_EYE, RIGHT_EYE, LEFT_EAR, RIGHT_EAR, LEFT_SHOULDER, RIGHT_SHOULDER, LEFT_ELBOW, RIGHT_ELBOW, LEFT_WRIST, RIGHT_WRIST, LEFT_HIP, RIGHT_HIP, LEFT_KNEE, RIGHT_KNEE, LEFT_ANKLE, RIGHT_ANKLE, KEY_POINTS_COUNT }; /** * GstMLKeyPoint: * @x: x coordinate * @y: y coordinate * @score: score of given pose * * Machine learning PoseNet poses */ struct _GstMLKeyPoint { gint x; gint y; gfloat score; }; /** * GstMLPoseNetMeta: * @parent: parent #GstMeta * @points: array of key points coordinates and score. * Key points order corresponds to GstMLKeyPointsType.
* @score: score of all poses * * Machine learning PoseNet models properties */ struct _GstMLPoseNetMeta { GstMeta parent; GstMLKeyPoint points[KEY_POINTS_COUNT]; gfloat score; }; GType gst_ml_detection_get_type (void); const GstMetaInfo * gst_ml_detection_get_info (void); GType gst_ml_segmentation_get_type (void); const GstMetaInfo * gst_ml_segmentation_get_info (void); GType gst_ml_classification_get_type (void); const GstMetaInfo * gst_ml_classification_get_info (void); GType gst_ml_posenet_get_type (void); const GstMetaInfo * gst_ml_posenet_get_info (void); /** * gst_buffer_add_detection_meta: * @buffer: the buffer new metadata belongs to * * Creates new bounding detection entry and returns pointer to new * entry. Metadata payload is not input parameter in order to avoid * unnecessary copy of data. * */ GST_EXPORT GstMLDetectionMeta * gst_buffer_add_detection_meta (GstBuffer * buffer); /** * gst_buffer_get_detection_meta: * @buffer: the buffer metadata comes from * * Returns list of bounding detection entries. List payload should be * considered as GstMLDetectionMeta. Caller is supposed to free the list. * */ GST_EXPORT GSList * gst_buffer_get_detection_meta (GstBuffer * buffer); /** * gst_buffer_add_segmentation_meta: * @buffer: the buffer new metadata belongs to * * Creates new segmentation metadata entry and returns pointer to new * entry. Metadata payload is not input parameter in order to avoid * unnecessary copy of data. * */ GST_EXPORT GstMLSegmentationMeta * gst_buffer_add_segmentation_meta (GstBuffer * buffer); /** * gst_buffer_get_segmentation_meta: * @buffer: the buffer metadata comes from * * Returns list of segmentation metadata entries. List payload should be * considered as GstMLSegmentationMeta. Caller is supposed to free the list. * */ GST_EXPORT GSList * gst_buffer_get_segmentation_meta (GstBuffer * buffer); /** * gst_buffer_add_classification_meta: * @buffer: the buffer new metadata belongs to * * Creates new classification metadata entry and returns pointer to new * entry. Metadata payload is not input parameter in order to avoid * unnecessary copy of data. * */ GST_EXPORT GstMLClassificationMeta * gst_buffer_add_classification_meta (GstBuffer * buffer); /** * gst_buffer_get_classification_meta: * @buffer: the buffer metadata comes from * * Returns list of classification metadata entries. List payload should be * considered as GstMLClassificationMeta. Caller is supposed to free the list. * */ GST_EXPORT GSList * gst_buffer_get_classification_meta (GstBuffer * buffer); /** * gst_buffer_add_posenet_meta: * @buffer: the buffer new metadata belongs to * * Creates new posenet metadata entry and returns pointer to new * entry. Metadata payload is not input parameter in order to avoid * unnecessary copy of data. * */ GST_EXPORT GstMLPoseNetMeta * gst_buffer_add_posenet_meta (GstBuffer * buffer); /** * gst_buffer_get_posenet_meta: * @buffer: the buffer metadata comes from * * Returns list of posenet metadata entries. List payload should be * considered as GstMLPoseNetMeta. Caller is supposed to free the list. 
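 *
 * For example, to read back every pose entry attached to a buffer and
 * release the list afterwards (a minimal usage sketch; `buffer` here is
 * any #GstBuffer that carries this metadata):
 *
 * |[<!-- language="C" -->
 * GSList *list = gst_buffer_get_posenet_meta (buffer);
 * GSList *entry;
 * for (entry = list; entry != NULL; entry = entry->next) {
 *   GstMLPoseNetMeta *meta = (GstMLPoseNetMeta *) entry->data;
 *   g_print ("pose score: %f\n", meta->score);
 * }
 * g_slist_free (list);
 * ]|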
* */ GST_EXPORT GSList * gst_buffer_get_posenet_meta (GstBuffer * buffer); G_END_DECLS #endif /* __GST_ML_META_H__ */
================================================
FILE: qti_gst_plugins/qtioverlay/qtioverlay/CMakeLists.txt
================================================
cmake_minimum_required(VERSION 3.8.2) project(GST_PLUGIN_QTI_OSS_OVERLAY VERSION ${GST_PLUGINS_QTI_OSS_VERSION} LANGUAGES C CXX ) set(CMAKE_INCLUDE_CURRENT_DIR ON) include_directories(${SYSROOT_INCDIR}) link_directories(${SYSROOT_LIBDIR}) find_package(PkgConfig) # Get the pkgconfigs exported by the automake tools pkg_check_modules(GST REQUIRED gstreamer-1.0>=${GST_VERSION_REQUIRED}) pkg_check_modules(GST_ALLOC REQUIRED gstreamer-allocators-1.0>=${GST_VERSION_REQUIRED}) pkg_check_modules(GST_VIDEO REQUIRED gstreamer-video-1.0>=${GST_VERSION_REQUIRED}) # Generate configuration header file. configure_file(config.h.in config.h @ONLY) include_directories(${CMAKE_CURRENT_BINARY_DIR}) # Precompiler definitions. add_definitions(-DHAVE_CONFIG_H) # Common compiler flags. set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_FLAGS "${CMAKE_C_FLAGS} -Wall -Wextra -Werror") set(CMAKE_CXX_FLAGS "${CMAKE_C_FLAGS} -DUSE_SKIA=0 -DUSE_CAIRO=1") # GStreamer plugin. set(GST_QTI_OVERLAY qtioverlay) add_library(${GST_QTI_OVERLAY} SHARED gstoverlay.cc ) target_include_directories(${GST_QTI_OVERLAY} PUBLIC ${GST_INCLUDE_DIRS} ) target_include_directories(${GST_QTI_OVERLAY} PRIVATE ${KERNEL_BUILDDIR}/usr/include ) target_link_libraries(${GST_QTI_OVERLAY} PRIVATE qmmf_overlay qtimlmeta ${GST_LIBRARIES} ${GST_ALLOC_LIBRARIES} ${GST_VIDEO_LIBRARIES} ) install( TARGETS ${GST_QTI_OVERLAY} LIBRARY DESTINATION ${GST_PLUGINS_QTI_OSS_INSTALL_LIBDIR}/gstreamer-1.0 PERMISSIONS OWNER_EXECUTE OWNER_WRITE OWNER_READ GROUP_EXECUTE GROUP_READ )
================================================
FILE: qti_gst_plugins/qtioverlay/qtioverlay/config.h.in
================================================
/* * Copyright (c) 2020, The Linux Foundation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of The Linux Foundation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/ #define PACKAGE "@GST_PLUGINS_QTI_OSS_PACKAGE@" #define PACKAGE_VERSION "@GST_PLUGINS_QTI_OSS_VERSION@" #define PACKAGE_LICENSE "@GST_PLUGINS_QTI_OSS_LICENSE@" #define PACKAGE_SUMMARY "@GST_PLUGINS_QTI_OSS_SUMMARY@" #define PACKAGE_ORIGIN "@GST_PLUGINS_QTI_OSS_ORIGIN@"
================================================
FILE: qti_gst_plugins/qtioverlay/qtioverlay/gstoverlay.cc
================================================
/* * Copyright (c) 2020, The Linux Foundation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of The Linux Foundation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifdef HAVE_CONFIG_H #include "config.h" #endif #include <gst/gst.h> #include <gst/video/video.h> #include <gst/allocators/allocators.h> #include "gstoverlay.h" #define GST_CAT_DEFAULT overlay_debug GST_DEBUG_CATEGORY_STATIC (overlay_debug); #define gst_overlay_parent_class parent_class G_DEFINE_TYPE (GstOverlay, gst_overlay, GST_TYPE_VIDEO_FILTER); #undef GST_VIDEO_SIZE_RANGE #define GST_VIDEO_SIZE_RANGE "(int) [ 1, 32767]" #define GST_VIDEO_FORMATS "{ NV12, NV21 }" #define DEFAULT_PROP_OVERLAY_TEXT NULL #define DEFAULT_PROP_OVERLAY_DATE NULL #define DEFAULT_PROP_OVERLAY_SIMG NULL #define DEFAULT_PROP_OVERLAY_BBOX NULL #define DEFAULT_PROP_OVERLAY_META_COLOR false #define DEFAULT_PROP_OVERLAY_BBOX_COLOR kColorBlue #define DEFAULT_PROP_OVERLAY_DATE_COLOR kColorRed #define DEFAULT_PROP_OVERLAY_TEXT_COLOR kColorYellow #define DEFAULT_PROP_OVERLAY_POSE_COLOR kColorLightGreen #define DEFAULT_PROP_OVERLAY_MASK_COLOR kColorDarkGray #define DEFAULT_PROP_DEST_RECT_X 40 #define DEFAULT_PROP_DEST_RECT_Y 40 #define DEFAULT_PROP_DEST_RECT_WIDTH 200 #define DEFAULT_PROP_DEST_RECT_HEIGHT 40 /* This is an initial value. Size is recalculated at runtime and the buffer is * reallocated at runtime.
*/ #define GST_OVERLAY_TO_STRING_SIZE 256 #define GST_OVERLAY_TEXT_STRING_SIZE 80 #define GST_OVERLAY_DATE_STRING_SIZE 100 #define GST_OVERLAY_SIMG_STRING_SIZE 100 #define GST_OVERLAY_BBOX_STRING_SIZE 80 #define GST_OVERLAY_MASK_STRING_SIZE 100 #define GST_OVERLAY_UNUSED(var) ((void)var) static GstMLKeyPointsType PoseChain [][2] { {LEFT_SHOULDER, RIGHT_SHOULDER}, {LEFT_SHOULDER, LEFT_ELBOW}, {LEFT_SHOULDER, LEFT_HIP}, {RIGHT_SHOULDER, RIGHT_ELBOW}, {RIGHT_SHOULDER, RIGHT_HIP}, {LEFT_ELBOW, LEFT_WRIST}, {RIGHT_ELBOW, RIGHT_WRIST}, {LEFT_HIP, RIGHT_HIP}, {LEFT_HIP, LEFT_KNEE}, {RIGHT_HIP, RIGHT_KNEE}, {LEFT_KNEE, LEFT_ANKLE}, {RIGHT_KNEE, RIGHT_ANKLE} }; /* Supported GST properties * PROP_OVERLAY_TEXT - overlays user defined text * PROP_OVERLAY_DATE - overlays date and time * PROP_OVERLAY_SIMG - overlays static image * PROP_OVERLAY_BBOX - overlays bounding box * PROP_OVERLAY_MASK - overlays privacy mask * PROP_OVERLAY_META_COLOR - Use color from meta data * PROP_OVERLAY_BBOX_COLOR - ML Detection color * PROP_OVERLAY_DATE_COLOR - ML Time and Date color * PROP_OVERLAY_TEXT_COLOR - ML Classification color * PROP_OVERLAY_POSE_COLOR - ML PoseNet color * PROP_OVERLAY_TEXT_DEST_RECT - ML Classification destination rectangle */ enum { PROP_0, PROP_OVERLAY_TEXT, PROP_OVERLAY_DATE, PROP_OVERLAY_SIMG, PROP_OVERLAY_BBOX, PROP_OVERLAY_MASK, PROP_OVERLAY_META_COLOR, PROP_OVERLAY_BBOX_COLOR, PROP_OVERLAY_DATE_COLOR, PROP_OVERLAY_TEXT_COLOR, PROP_OVERLAY_POSE_COLOR, PROP_OVERLAY_TEXT_DEST_RECT }; static GstStaticCaps gst_overlay_format_caps = GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (GST_VIDEO_FORMATS) ";" GST_VIDEO_CAPS_MAKE_WITH_FEATURES ("ANY", GST_VIDEO_FORMATS)); /** * GstOverlayMetaApplyFunc: * @gst_overlay: context * @meta: metadata payload * @item_id: overlay item instance id * * API for overlay configuration by metadata. */ typedef gboolean (* GstOverlayMetaApplyFunc) (GstOverlay *gst_overlay, gpointer meta, uint32_t * item_id); /** * GstOverlaySetFunc: * @entry: result of parsed input is stored here * @structure: user input * @entry_exist: hint if entry exist or new entry. This is helpfull when * some default values are needed to be set. * * API for overlay configuration by GST property. */ typedef gboolean (* GstOverlaySetFunc) (GstOverlayUser * entry, GstStructure * structure, gboolean entry_exist); /** * GstOverlayGetFunc: * @data: input structure * @user_data: output string * * API for quering overlay configuration by GST property. */ typedef void (* GstOverlayGetFunc) (gpointer data, gpointer user_data); /** * gst_overlay_caps: * * Expose overlay pads capabilities. */ static GstCaps * gst_overlay_caps (void) { static GstCaps *caps = NULL; static volatile gsize inited = 0; if (g_once_init_enter (&inited)) { caps = gst_static_caps_get (&gst_overlay_format_caps); g_once_init_leave (&inited, 1); } return caps; } /** * gst_overlay_src_template: * * Expose overlay source pads capabilities. */ static GstPadTemplate * gst_overlay_src_template (void) { return gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS, gst_overlay_caps ()); } /** * gst_overlay_sink_template: * * Expose overlay sink pads capabilities. */ static GstPadTemplate * gst_overlay_sink_template (void) { return gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS, gst_overlay_caps ()); } /** * gst_overlay_destroy_overlay_item: * @data: pointer to overlay item instance id * @user_data: context * * Destroy overlay instance and reset overlay instance id. 
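 *
 * Note that the item is disabled before it is deleted, and the stored id
 * is reset to 0, which is the marker the apply functions use to decide
 * between CreateOverlayItem and UpdateOverlayParams.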
*/ static void gst_overlay_destroy_overlay_item (gpointer data, gpointer user_data) { uint32_t *item_id = (uint32_t *) data; Overlay *overlay = (Overlay *) user_data; int32_t ret = overlay->DisableOverlayItem (*item_id); if (ret != 0) { GST_ERROR ("Overlay %d disable failed!", *item_id); } ret = overlay->DeleteOverlayItem (*item_id); if (ret != 0) { GST_ERROR ("Overlay %d delete failed!", *item_id); } *item_id = 0; } /** * gst_overlay_apply_item_list: * @gst_overlay: context * @meta_list: List of metadata entries * @apply_func: overlay configuration API. Converts metadata to overlay * configuration and applies it * @ov_id: overlay item instance id handlers * * Iterates list of metadata entries and call provided overlay configuration * API for each of them. Overlay instances ids are also managed by this * function. * * Return true if succeed. */ static gboolean gst_overlay_apply_item_list (GstOverlay *gst_overlay, GSList * meta_list, GstOverlayMetaApplyFunc apply_func, GSequence * ov_id) { gboolean res = TRUE; guint meta_num = g_slist_length (meta_list); if (meta_num) { for (uint32_t i = g_sequence_get_length (ov_id); i < meta_num; i++) { g_sequence_append(ov_id, calloc(1, sizeof(uint32_t))); } for (uint32_t i = 0; i < meta_num; i++) { res = apply_func (gst_overlay, g_slist_nth_data (meta_list, 0), (uint32_t *) g_sequence_get (g_sequence_get_iter_at_pos (ov_id, i))); if (!res) { GST_ERROR_OBJECT (gst_overlay, "Overlay create failed!"); return res; } meta_list = meta_list->next; } } if ((guint) g_sequence_get_length (ov_id) > meta_num) { g_sequence_foreach_range ( g_sequence_get_iter_at_pos (ov_id, meta_num), g_sequence_get_end_iter (ov_id), gst_overlay_destroy_overlay_item, gst_overlay->overlay); g_sequence_remove_range ( g_sequence_get_iter_at_pos (ov_id, meta_num), g_sequence_get_end_iter (ov_id)); } return TRUE; } /** * gst_overlay_apply_bbox_item: * @gst_overlay: context * @bbox: bounding box rectangle * @label: bounding box label * @color: text overlay * @item_id: pointer to overlay item instance id * * Configures and enables bounding box overlay instance. * * Return true if succeed. */ static gboolean gst_overlay_apply_bbox_item (GstOverlay * gst_overlay, GstVideoRectangle * bbox, gchar * label, guint color, uint32_t * item_id) { OverlayParam ov_param; int32_t ret = 0; g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (bbox != NULL, FALSE); g_return_val_if_fail (label != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); if (!(*item_id)) { ov_param = {}; ov_param.type = OverlayType::kBoundingBox; ov_param.location = OverlayLocationType::kTopLeft; } else { ret = gst_overlay->overlay->GetOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay get param failed! ret: %d", ret); return FALSE; } } ov_param.color = color; ov_param.dst_rect.start_x = bbox->x; ov_param.dst_rect.start_y = bbox->y; ov_param.dst_rect.width = bbox->w; ov_param.dst_rect.height = bbox->h; if (sizeof (ov_param.bounding_box.box_name) <= strlen (label)) { GST_ERROR_OBJECT (gst_overlay, "Text size exceeded %d <= %d", sizeof (ov_param.bounding_box.box_name), strlen (label)); return FALSE; } g_strlcpy (ov_param.bounding_box.box_name, label, sizeof (ov_param.bounding_box.box_name)); if (!(*item_id)) { ret = gst_overlay->overlay->CreateOverlayItem (ov_param, item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay create failed! 
ret: %d", ret); return FALSE; } ret = gst_overlay->overlay->EnableOverlayItem (*item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay enable failed! ret: %d", ret); return FALSE; } } else { ret = gst_overlay->overlay->UpdateOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay set param failed! ret: %d", ret); return FALSE; } } return TRUE; } /** * gst_overlay_apply_ml_bbox_item: * @gst_overlay: context * @metadata: machine learning metadata entry * @item_id: pointer to overlay item instance id * * Converts GstMLDetectionMeta metadata to overlay configuration and applies it * as bounding box overlay. * * Return true if succeed. */ static gboolean gst_overlay_apply_ml_bbox_item (GstOverlay * gst_overlay, gpointer metadata, uint32_t * item_id) { OverlayParam ov_param; int32_t ret = 0; g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (metadata != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); GstMLDetectionMeta * meta = (GstMLDetectionMeta *) metadata; GstMLClassificationResult * result = (GstMLClassificationResult *) g_slist_nth_data (meta->box_info, 0); GstVideoRectangle bbox; bbox.x = meta->bounding_box.x; bbox.y = meta->bounding_box.y; bbox.w = meta->bounding_box.width; bbox.h = meta->bounding_box.height; if (gst_overlay->meta_color) gst_overlay->bbox_color = meta->bbox_color; return gst_overlay_apply_bbox_item (gst_overlay, &bbox, result->name, gst_overlay->bbox_color, item_id); } /** * gst_overlay_apply_user_bbox_item: * @data: context * @user_data: overlay configuration of GstOverlayUsrBBox type * * Configures text overlay instance with user provided configuration * and enables it. */ static void gst_overlay_apply_user_bbox_item (gpointer data, gpointer user_data) { g_return_if_fail (data != NULL); g_return_if_fail (user_data != NULL); GstOverlay * gst_overlay = (GstOverlay *) user_data; GstOverlayUsrBBox * ov_data = (GstOverlayUsrBBox *) data; if (!ov_data->base.is_applied) { gboolean res = gst_overlay_apply_bbox_item (gst_overlay, &ov_data->boundind_box, ov_data->label, ov_data->color, &ov_data->base.item_id); if (!res) { GST_ERROR_OBJECT (gst_overlay, "User overlay apply failed!"); return; } ov_data->base.is_applied = TRUE; } } /** * gst_overlay_apply_simg_item: * @gst_overlay: context * @img_buffer: pointer to image buffer * @img_size: image buffer size * @src_rect: represent image dimension in buffer * @dst_rect: render destination rectangle in video stream * @item_id: pointer to overlay item instance id * * Configures and enables static image overlay instance. * * Return true if succeed. */ static gboolean gst_overlay_apply_simg_item (GstOverlay *gst_overlay, gpointer img_buffer, guint img_size, GstVideoRectangle *src_rect, GstVideoRectangle *dst_rect, uint32_t *item_id) { OverlayParam ov_param; int32_t ret = 0; g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (img_buffer != NULL, FALSE); g_return_val_if_fail (src_rect != NULL, FALSE); g_return_val_if_fail (dst_rect != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); if (!(*item_id)) { ov_param = {}; ov_param.type = OverlayType::kStaticImage; ov_param.image_info.image_type = OverlayImageType::kBlobType; ov_param.location = OverlayLocationType::kRandom; } else { ret = gst_overlay->overlay->GetOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay get param failed! 
ret: %d", ret); return FALSE; } } ov_param.dst_rect.start_x = dst_rect->x; ov_param.dst_rect.start_y = dst_rect->y; ov_param.dst_rect.width = dst_rect->w; ov_param.dst_rect.height = dst_rect->h; ov_param.image_info.source_rect.start_x = src_rect->x; ov_param.image_info.source_rect.start_y = src_rect->y; ov_param.image_info.source_rect.width = src_rect->w; ov_param.image_info.source_rect.height = src_rect->h; ov_param.image_info.image_buffer = (char *)img_buffer; ov_param.image_info.image_size = img_size; ov_param.image_info.buffer_updated = true; if (!(*item_id)) { ret = gst_overlay->overlay->CreateOverlayItem (ov_param, item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay create failed! ret: %d", ret); return FALSE; } ret = gst_overlay->overlay->EnableOverlayItem (*item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay enable failed! ret: %d", ret); return FALSE; } } else { ret = gst_overlay->overlay->UpdateOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay set param failed! ret: %d", ret); return FALSE; } } return TRUE; } /** * gst_overlay_apply_ml_simg_item: * @gst_overlay: context * @metadata: machine learning metadata entry * @item_id: pointer to overlay item instance id * * Converts GstMLSegmentationMeta metadata to overlay configuration and applies * it as static image overlay. * * Return true if succeed. */ static gboolean gst_overlay_apply_ml_simg_item (GstOverlay *gst_overlay, gpointer metadata, uint32_t *item_id) { g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (metadata != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); GstMLSegmentationMeta *meta = (GstMLSegmentationMeta *) metadata; GstVideoRectangle dst_rect; dst_rect.x = 0; dst_rect.y = 0; dst_rect.w = gst_overlay->width; dst_rect.h = gst_overlay->height; GstVideoRectangle src_rect; src_rect.x = 0; src_rect.y = 0; src_rect.w = meta->img_width; src_rect.h = meta->img_height; return gst_overlay_apply_simg_item (gst_overlay, meta->img_buffer, meta->img_size, &src_rect, &dst_rect, item_id); } /** * gst_overlay_apply_user_simg_item: * @data: context * @user_data: overlay configuration of GstOverlayUsrSImg type * * Configures static image overlay instance with user provided configuration * and enables it. */ static void gst_overlay_apply_user_simg_item (gpointer data, gpointer user_data) { g_return_if_fail (data != NULL); g_return_if_fail (user_data != NULL); GstOverlay * gst_overlay = (GstOverlay *) user_data; GstOverlayUsrSImg * ov_data = (GstOverlayUsrSImg *) data; if (!ov_data->base.is_applied) { GstVideoRectangle dst_rect; dst_rect.x = ov_data->dest_rect.x; dst_rect.y = ov_data->dest_rect.y; dst_rect.w = ov_data->dest_rect.w; dst_rect.h = ov_data->dest_rect.h; GstVideoRectangle src_rect; src_rect.x = 0; src_rect.y = 0; src_rect.w = ov_data->img_width; src_rect.h = ov_data->img_height; gboolean res = gst_overlay_apply_simg_item (gst_overlay, ov_data->img_buffer, ov_data->img_size, &src_rect, &dst_rect, &ov_data->base.item_id); if (!res) { GST_ERROR_OBJECT (gst_overlay, "User overlay apply failed!"); return; } ov_data->base.is_applied = TRUE; } } /** * gst_overlay_apply_text_item: * @gst_overlay: context * @name: overlay text * @color: text overlay * @location: render location in video stream * @dest_rect: render destination rectangle in video stream * @item_id: pointer to overlay item instance id * * Configures and enables text overlay instance. * * Return true if succeed. 
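 *
 * Note: although a @location argument is accepted, the implementation
 * always sets OverlayLocationType::kNone and positions the text through
 * @dest_rect alone.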
*/ static gboolean gst_overlay_apply_text_item (GstOverlay * gst_overlay, gchar * name, guint color, OverlayLocationType location, GstVideoRectangle * dest_rect, uint32_t * item_id) { OverlayParam ov_param; int32_t ret = 0; g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (name != NULL, FALSE); g_return_val_if_fail (dest_rect != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); if (!(*item_id)) { ov_param = {}; ov_param.type = OverlayType::kUserText; } else { ret = gst_overlay->overlay->GetOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay get param failed! ret: %d", ret); return FALSE; } } ov_param.color = color; ov_param.location = OverlayLocationType::kNone; ov_param.dst_rect.start_x = dest_rect->x; ov_param.dst_rect.start_y = dest_rect->y; ov_param.dst_rect.width = dest_rect->w; ov_param.dst_rect.height = dest_rect->h; if (sizeof (ov_param.user_text) <= strlen (name)) { GST_ERROR_OBJECT (gst_overlay, "Text size exceeded %d <= %d", sizeof (ov_param.user_text), strlen (name)); return FALSE; } g_strlcpy (ov_param.user_text, name, sizeof (ov_param.user_text)); if (!(*item_id)) { ret = gst_overlay->overlay->CreateOverlayItem (ov_param, item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay create failed! ret: %d", ret); return FALSE; } ret = gst_overlay->overlay->EnableOverlayItem (*item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay enable failed! ret: %d", ret); return FALSE; } } else { ret = gst_overlay->overlay->UpdateOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay set param failed! ret: %d", ret); return FALSE; } } return TRUE; } /** * gst_overlay_apply_ml_text_item: * @gst_overlay: context * @metadata: machine learning metadata entry * @item_id: pointer to overlay item instance id * * Converts GstMLClassificationMeta metadata to overlay configuration and * applies it as text overlay. * * Return true if succeed. */ static gboolean gst_overlay_apply_ml_text_item (GstOverlay *gst_overlay, gpointer metadata, uint32_t * item_id) { g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (metadata != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); GstMLClassificationMeta * meta = (GstMLClassificationMeta *) metadata; return gst_overlay_apply_text_item (gst_overlay, meta->result.name, gst_overlay->text_color, OverlayLocationType::kTopLeft, &gst_overlay->text_dest_rect, item_id); } /** * gst_overlay_apply_user_text_item: * @data: context * @user_data: overlay configuration of GstOverlayUsrText type * * Configures text overlay instance with user provided configuration * and enables it. */ static void gst_overlay_apply_user_text_item (gpointer data, gpointer user_data) { g_return_if_fail (data != NULL); g_return_if_fail (user_data != NULL); GstOverlay * gst_overlay = (GstOverlay *) user_data; GstOverlayUsrText * ov_data = (GstOverlayUsrText *) data; if (!ov_data->base.is_applied) { gboolean res = gst_overlay_apply_text_item (gst_overlay, ov_data->text, ov_data->color, OverlayLocationType::kNone, &ov_data->dest_rect, &ov_data->base.item_id); if (!res) { GST_ERROR_OBJECT (gst_overlay, "User overlay apply failed!"); return; } ov_data->base.is_applied = TRUE; } } /** * gst_overlay_apply_ml_pose_item: * @gst_overlay: context * @metadata: machine learning metadata entry * @item_id: pointer to overlay item instance id * * Converts GstMLPoseNetMeta metadata to overlay configuration and applies * it as graph overlay. 
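 * Only key points whose score exceeds the internal threshold (0.1) are
 * copied into the graph, and a chain segment between two points is drawn
 * only when both of its end points pass that threshold.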
* * Return true if succeed. */ static gboolean gst_overlay_apply_ml_pose_item (GstOverlay *gst_overlay, gpointer metadata, uint32_t * item_id) { OverlayParam ov_param; int32_t ret = 0; g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (metadata != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); GstMLPoseNetMeta * pose = (GstMLPoseNetMeta *) metadata; static float kScoreTreshold = 0.1; if (!(*item_id)) { ov_param = {}; ov_param.type = OverlayType::kGraph; ov_param.color = gst_overlay->pose_color; } else { ret = gst_overlay->overlay->GetOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay get param failed! ret: %d", ret); return FALSE; } } ov_param.dst_rect.start_x = 0; ov_param.dst_rect.start_y = 0; ov_param.dst_rect.width = gst_overlay->width; ov_param.dst_rect.height = gst_overlay->height; gint count = 0; gint points[KEY_POINTS_COUNT]; if (pose->points[NOSE].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[NOSE].x; ov_param.graph.points[count].y = pose->points[NOSE].y; points[NOSE] = count; count++; } if (pose->points[LEFT_EYE].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_EYE].x; ov_param.graph.points[count].y = pose->points[LEFT_EYE].y; points[LEFT_EYE] = count; count++; } if (pose->points[RIGHT_EYE].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[RIGHT_EYE].x; ov_param.graph.points[count].y = pose->points[RIGHT_EYE].y; points[RIGHT_EYE] = count; count++; } if (pose->points[LEFT_EAR].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_EAR].x; ov_param.graph.points[count].y = pose->points[LEFT_EAR].y; points[LEFT_EAR] = count; count++; } if (pose->points[RIGHT_EAR].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[RIGHT_EAR].x; ov_param.graph.points[count].y = pose->points[RIGHT_EAR].y; points[RIGHT_EAR] = count; count++; } if (pose->points[LEFT_SHOULDER].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_SHOULDER].x; ov_param.graph.points[count].y = pose->points[LEFT_SHOULDER].y; points[LEFT_SHOULDER] = count; count++; } if (pose->points[RIGHT_SHOULDER].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[RIGHT_SHOULDER].x; ov_param.graph.points[count].y = pose->points[RIGHT_SHOULDER].y; points[RIGHT_SHOULDER] = count; count++; } if (pose->points[LEFT_ELBOW].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_ELBOW].x; ov_param.graph.points[count].y = pose->points[LEFT_ELBOW].y; points[LEFT_ELBOW] = count; count++; } if (pose->points[RIGHT_ELBOW].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[RIGHT_ELBOW].x; ov_param.graph.points[count].y = pose->points[RIGHT_ELBOW].y; points[RIGHT_ELBOW] = count; count++; } if (pose->points[LEFT_WRIST].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_WRIST].x; ov_param.graph.points[count].y = pose->points[LEFT_WRIST].y; points[LEFT_WRIST] = count; count++; } if (pose->points[RIGHT_WRIST].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[RIGHT_WRIST].x; ov_param.graph.points[count].y = pose->points[RIGHT_WRIST].y; points[RIGHT_WRIST] = count; count++; } if (pose->points[LEFT_HIP].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_HIP].x; ov_param.graph.points[count].y = pose->points[LEFT_HIP].y; points[LEFT_HIP] = count; count++; } if (pose->points[RIGHT_HIP].score > kScoreTreshold) { 
ov_param.graph.points[count].x = pose->points[RIGHT_HIP].x; ov_param.graph.points[count].y = pose->points[RIGHT_HIP].y; points[RIGHT_HIP] = count; count++; } if (pose->points[LEFT_KNEE].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_KNEE].x; ov_param.graph.points[count].y = pose->points[LEFT_KNEE].y; points[LEFT_KNEE] = count; count++; } if (pose->points[RIGHT_KNEE].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[RIGHT_KNEE].x; ov_param.graph.points[count].y = pose->points[RIGHT_KNEE].y; points[RIGHT_KNEE] = count; count++; } if (pose->points[LEFT_ANKLE].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[LEFT_ANKLE].x; ov_param.graph.points[count].y = pose->points[LEFT_ANKLE].y; points[LEFT_ANKLE] = count; count++; } if (pose->points[RIGHT_ANKLE].score > kScoreTreshold) { ov_param.graph.points[count].x = pose->points[RIGHT_ANKLE].x; ov_param.graph.points[count].y = pose->points[RIGHT_ANKLE].y; points[RIGHT_ANKLE] = count; count++; } ov_param.graph.points_count = count; count = 0; ov_param.graph.chain_count = 0; for (gint i = 0; i < sizeof (PoseChain) / sizeof (PoseChain[0]); i++) { GstMLKeyPointsType point0 = PoseChain[i][0]; GstMLKeyPointsType point1 = PoseChain[i][1]; if (pose->points[point0].score > kScoreTreshold && pose->points[point1].score > kScoreTreshold) { ov_param.graph.chain[count][0] = points[point0]; ov_param.graph.chain[count][1] = points[point1]; count++; } } ov_param.graph.chain_count = count; if (!(*item_id)) { ret = gst_overlay->overlay->CreateOverlayItem (ov_param, item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay create failed! ret: %d", ret); return FALSE; } ret = gst_overlay->overlay->EnableOverlayItem (*item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay enable failed! ret: %d", ret); return FALSE; } } else { ret = gst_overlay->overlay->UpdateOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay set param failed! ret: %d", ret); return FALSE; } } return TRUE; } /** * gst_overlay_apply_date_item: * @gst_overlay: context * @time_format: time format * @date_format: date format * @color: date and time color * @location: render location in video stream * @dest_rect: render destination rectangle in video stream * @item_id: pointer to overlay item instance id * * Configures and enables date overlay instance. * * Return true if succeed. */ static gboolean gst_overlay_apply_date_item (GstOverlay *gst_overlay, OverlayTimeFormatType time_format, OverlayDateFormatType date_format, guint color, OverlayLocationType location, GstVideoRectangle * dest_rect, uint32_t * item_id) { OverlayParam ov_param; int32_t ret = 0; g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); if (!(*item_id)) { ov_param = {}; ov_param.type = OverlayType::kDateType; } else { ret = gst_overlay->overlay->GetOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay get param failed! ret: %d", ret); return FALSE; } } ov_param.color = color; ov_param.location = location; ov_param.dst_rect.start_x = dest_rect->x; ov_param.dst_rect.start_y = dest_rect->y; ov_param.dst_rect.width = dest_rect->w; ov_param.dst_rect.height = dest_rect->h; ov_param.date_time.time_format = time_format; ov_param.date_time.date_format = date_format; if (!(*item_id)) { ret = gst_overlay->overlay->CreateOverlayItem (ov_param, item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay create failed! 
ret: %d", ret); return FALSE; } ret = gst_overlay->overlay->EnableOverlayItem (*item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay enable failed! ret: %d", ret); return FALSE; } } else { ret = gst_overlay->overlay->UpdateOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay set param failed! ret: %d", ret); return FALSE; } } return TRUE; } /** * gst_overlay_apply_user_date_item: * @data: context * @user_data: overlay configuration of GstOverlayUsrDate type * * Configures date overlay instance with user provided configuration * and enables it. */ static void gst_overlay_apply_user_date_item (gpointer data, gpointer user_data) { g_return_if_fail (data != NULL); g_return_if_fail (user_data != NULL); GstOverlay * gst_overlay = (GstOverlay *) user_data; GstOverlayUsrDate * ov_data = (GstOverlayUsrDate *) data; if (!ov_data->base.is_applied) { gboolean res = gst_overlay_apply_date_item (gst_overlay, ov_data->time_format, ov_data->date_format, ov_data->color, OverlayLocationType::kNone, &ov_data->dest_rect, &ov_data->base.item_id); if (!res) { GST_ERROR_OBJECT (gst_overlay, "User overlay apply failed!"); return; } ov_data->base.is_applied = TRUE; } } /** * gst_overlay_apply_mask_item: * @gst_overlay: context * @circle: circle dimensions * @rectangle: rectangle dimensions * @color: privacy mask color * @dest_rect: render destination rectangle in video stream * @item_id: pointer to overlay item instance id * * Configures and enables privacy mask overlay instance. * * Return true if succeed. */ static gboolean gst_overlay_apply_mask_item (GstOverlay * gst_overlay, OverlayPrivacyMaskType type, Overlaycircle *circle, OverlayRect *rectangle, guint color, GstVideoRectangle * dest_rect, uint32_t * item_id) { OverlayParam ov_param; int32_t ret = 0; g_return_val_if_fail (gst_overlay != NULL, FALSE); g_return_val_if_fail (circle != NULL, FALSE); g_return_val_if_fail (rectangle != NULL, FALSE); g_return_val_if_fail (dest_rect != NULL, FALSE); g_return_val_if_fail (item_id != NULL, FALSE); if (!(*item_id)) { ov_param = {}; ov_param.type = OverlayType::kPrivacyMask; } else { ret = gst_overlay->overlay->GetOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay get param failed! ret: %d", ret); return FALSE; } } ov_param.color = color; ov_param.location = OverlayLocationType::kNone; ov_param.dst_rect.start_x = dest_rect->x; ov_param.dst_rect.start_y = dest_rect->y; ov_param.dst_rect.width = dest_rect->w; ov_param.dst_rect.height = dest_rect->h; ov_param.privacy_mask.type = type; if (type == OverlayPrivacyMaskType::kInverseRectangle || type == OverlayPrivacyMaskType::kRectangle) { ov_param.privacy_mask.rectangle = *rectangle; } else { ov_param.privacy_mask.circle = *circle; } if (!(*item_id)) { ret = gst_overlay->overlay->CreateOverlayItem (ov_param, item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay create failed! ret: %d", ret); return FALSE; } ret = gst_overlay->overlay->EnableOverlayItem (*item_id); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay enable failed! ret: %d", ret); return FALSE; } } else { ret = gst_overlay->overlay->UpdateOverlayParams (*item_id, ov_param); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay set param failed! 
ret: %d", ret); return FALSE; } } return TRUE; } /** * gst_overlay_apply_user_mask_item: * @data: context * @user_data: overlay configuration of GstOverlayUsrMask type * * Configures privacy mask overlay instace with user provided configuration * and enables it. */ static void gst_overlay_apply_user_mask_item (gpointer data, gpointer user_data) { g_return_if_fail (data != NULL); g_return_if_fail (user_data != NULL); GstOverlay * gst_overlay = (GstOverlay *) user_data; GstOverlayUsrMask * ov_data = (GstOverlayUsrMask *) data; if (!ov_data->base.is_applied) { gboolean res = gst_overlay_apply_mask_item (gst_overlay, ov_data->type, &ov_data->circle, &ov_data->rectangle, ov_data->color, &ov_data->dest_rect, &ov_data->base.item_id); if (!res) { GST_ERROR_OBJECT (gst_overlay, "User overlay apply failed!"); return; } ov_data->base.is_applied = TRUE; } } /** * gst_overlay_apply_overlay: * @gst_overlay: context * @frame: GST video frame * * Renders all created overlay instances on top of a video frame. * * Return true if succeed. */ static gboolean gst_overlay_apply_overlay (GstOverlay *gst_overlay, GstVideoFrame *frame) { int32_t ret; GstMemory *memory = gst_buffer_peek_memory (frame->buffer, 0); guint fd = gst_fd_memory_get_fd (memory); OverlayTargetBuffer overlay_buf; overlay_buf.width = GST_VIDEO_FRAME_WIDTH (frame); overlay_buf.height = GST_VIDEO_FRAME_HEIGHT (frame); overlay_buf.ion_fd = fd; overlay_buf.frame_len = gst_buffer_get_size (frame->buffer); overlay_buf.format = gst_overlay->format; ret = gst_overlay->overlay->ApplyOverlay (overlay_buf); if (ret != 0) { GST_ERROR_OBJECT (gst_overlay, "Overlay apply failed!"); return FALSE; } return TRUE; } /** * gst_overlay_set_text_overlay: * @entry: result of parsed input is stored here * @structure: user input * @entry_exist: hint if entry exist or new entry. This is helpfull when * some default values are needed to be set. * * This function parses user overlay configuration. Function is called when * overlay is configured by GST property. * * Return true if succeed. */ static gboolean gst_overlay_set_text_overlay (GstOverlayUser * entry, GstStructure * structure, gboolean entry_exist) { GstOverlayUsrText * text_entry = (GstOverlayUsrText *) entry; gboolean color_set = FALSE; gboolean entry_valid = FALSE; for (gint idx = 0; idx < gst_structure_n_fields (structure); ++idx) { const gchar *name = gst_structure_nth_field_name (structure, idx); const GValue *value = NULL; value = gst_structure_get_value (structure, name); if (!g_strcmp0 (name, "text") && G_VALUE_HOLDS (value, G_TYPE_STRING)) { text_entry->text = g_strdup (g_value_get_string (value)); if (strlen (text_entry->text) > 0) { entry_valid = TRUE; } else { GST_INFO ("String is empty. 
Stop overlay if exist"); free (text_entry->text); return FALSE; } } if (!g_strcmp0 (name, "color")) { if (G_VALUE_HOLDS (value, G_TYPE_UINT)) { text_entry->color = g_value_get_uint (value); color_set = TRUE; } if (G_VALUE_HOLDS (value, G_TYPE_INT)) { text_entry->color = (guint)g_value_get_int (value); color_set = TRUE; } } if (!g_strcmp0 (name, "dest-rect") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) && gst_value_array_get_size (value) == 4) { text_entry->dest_rect.x = g_value_get_int (gst_value_array_get_value (value, 0)); text_entry->dest_rect.y = g_value_get_int (gst_value_array_get_value (value, 1)); text_entry->dest_rect.w = g_value_get_int (gst_value_array_get_value (value, 2)); text_entry->dest_rect.h = g_value_get_int (gst_value_array_get_value (value, 3)); } } if (!color_set && entry_valid && !entry_exist) { text_entry->color = DEFAULT_PROP_OVERLAY_TEXT_COLOR; } return entry_valid; } /** * gst_overlay_set_date_overlay: * @entry: result of parsed input is stored here * @structure: user input * @entry_exist: hint if entry exist or new entry. This is helpfull when * some default values are needed to be set. * * This function parses user overlay configuration. Function is called when * overlay is configured by GST property. * * Return true if succeed. */ static gboolean gst_overlay_set_date_overlay (GstOverlayUser * entry, GstStructure * structure, gboolean entry_exist) { GstOverlayUsrDate * date_entry = (GstOverlayUsrDate *) entry; gboolean color_set = FALSE; gboolean entry_valid = FALSE; gboolean date_valid = FALSE; gboolean time_valid = FALSE; for (gint idx = 0; idx < gst_structure_n_fields (structure); ++idx) { const gchar *name = gst_structure_nth_field_name (structure, idx); const GValue *value = NULL; value = gst_structure_get_value (structure, name); if (!g_strcmp0(name, "date-format") && G_VALUE_HOLDS (value, G_TYPE_STRING)) { if (!g_strcmp0 (g_value_get_string (value), "YYYYMMDD")) { date_entry->date_format = OverlayDateFormatType::kYYYYMMDD; } else if (!g_strcmp0 (g_value_get_string (value), "MMDDYYYY")) { date_entry->date_format = OverlayDateFormatType::kMMDDYYYY; } else { GST_ERROR ("Unsupported date format %s", g_value_get_string (value)); return FALSE; } date_valid = TRUE; } if (!g_strcmp0(name, "time-format") && G_VALUE_HOLDS (value, G_TYPE_STRING)) { if (!g_strcmp0 (g_value_get_string (value), "HHMMSS_24HR")) { date_entry->time_format = OverlayTimeFormatType::kHHMMSS_24HR; } else if (!g_strcmp0 (g_value_get_string (value), "HHMMSS_AMPM")) { date_entry->time_format = OverlayTimeFormatType::kHHMMSS_AMPM; } else if (!g_strcmp0 (g_value_get_string (value), "HHMM_24HR")) { date_entry->time_format = OverlayTimeFormatType::kHHMM_24HR; } else if (!g_strcmp0 (g_value_get_string (value), "HHMM_AMPM")) { date_entry->time_format = OverlayTimeFormatType::kHHMM_AMPM; } else { GST_ERROR ("Unsupported time format %s", g_value_get_string (value)); return FALSE; } time_valid = TRUE; } if (!g_strcmp0 (name, "color")) { if (G_VALUE_HOLDS (value, G_TYPE_UINT)) { date_entry->color = g_value_get_uint (value); color_set = TRUE; } if (G_VALUE_HOLDS (value, G_TYPE_INT)) { date_entry->color = (guint)g_value_get_int (value); color_set = TRUE; } } if (!g_strcmp0 (name, "dest-rect") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) && gst_value_array_get_size (value) == 4) { date_entry->dest_rect.x = g_value_get_int (gst_value_array_get_value (value, 0)); date_entry->dest_rect.y = g_value_get_int (gst_value_array_get_value (value, 1)); date_entry->dest_rect.w = g_value_get_int (gst_value_array_get_value 
(value, 2)); date_entry->dest_rect.h = g_value_get_int (gst_value_array_get_value (value, 3)); } } entry_valid = date_valid && time_valid; if (!color_set && entry_valid && !entry_exist) { date_entry->color = DEFAULT_PROP_OVERLAY_DATE_COLOR; } return entry_valid; } /** * gst_overlay_set_simg_overlay: * @entry: result of parsed input is stored here * @structure: user input * @entry_exist: hint if entry exist or new entry. This is helpfull when * some default values are needed to be set. * * This function parses user overlay configuration. Function is called when * overlay is configured by GST property. * * Return true if succeed. */ static gboolean gst_overlay_set_simg_overlay (GstOverlayUser * entry, GstStructure * structure, gboolean entry_exist) { GstOverlayUsrSImg * simg_entry = (GstOverlayUsrSImg *) entry; gboolean entry_valid = FALSE; gboolean image_valid = FALSE; gboolean resolution_valid = FALSE; for (gint idx = 0; idx < gst_structure_n_fields (structure); ++idx) { const gchar *name = gst_structure_nth_field_name (structure, idx); const GValue *value = NULL; value = gst_structure_get_value (structure, name); if (!g_strcmp0 (name, "image") && G_VALUE_HOLDS (value, G_TYPE_STRING)) { simg_entry->img_file = g_strdup (g_value_get_string (value)); if (!strlen (simg_entry->img_file)) { GST_INFO ("String is empty. Stop overlay if exist"); break; } if (!g_file_test (simg_entry->img_file, G_FILE_TEST_IS_REGULAR)) { GST_INFO ("File %s does not exist", simg_entry->img_file); break; } // free previous buffer in case of reconfiguration if (entry_exist && simg_entry->img_buffer) { free (simg_entry->img_buffer); simg_entry->img_buffer = NULL; simg_entry->img_size = 0; } GError *error = NULL; gboolean ret = g_file_get_contents (simg_entry->img_file, &simg_entry->img_buffer, &simg_entry->img_size, &error); if (!ret) { GST_INFO ("Failed to get image file content, error: %s!", GST_STR_NULL (error->message)); g_clear_error (&error); break; } image_valid = TRUE; } if (!g_strcmp0 (name, "resolution") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) && gst_value_array_get_size (value) == 2) { simg_entry->img_width = g_value_get_int (gst_value_array_get_value (value, 0)); simg_entry->img_height = g_value_get_int (gst_value_array_get_value (value, 1)); if (simg_entry->img_width == 0 || simg_entry->img_height == 0) { GST_INFO ("Invalid image resolution %dx%d!", simg_entry->img_width, simg_entry->img_height); break; } resolution_valid = TRUE; } if (!g_strcmp0 (name, "dest-rect") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) && gst_value_array_get_size (value) == 4) { simg_entry->dest_rect.x = g_value_get_int (gst_value_array_get_value (value, 0)); simg_entry->dest_rect.y = g_value_get_int (gst_value_array_get_value (value, 1)); simg_entry->dest_rect.w = g_value_get_int (gst_value_array_get_value (value, 2)); simg_entry->dest_rect.h = g_value_get_int (gst_value_array_get_value (value, 3)); } } entry_valid = image_valid && resolution_valid; if (!entry_valid && !entry_exist) { // Clean up if entry is not valid and does not exist. If entry exists but // it is not valid than entry will be stoped and release handle will take // care of cleaning up. if (simg_entry->img_file) { free (simg_entry->img_file); } if (simg_entry->img_buffer) { free (simg_entry->img_buffer); } } return entry_valid; } /** * gst_overlay_set_bbox_overlay: * @entry: result of parsed input is stored here * @structure: user input * @entry_exist: hint if entry exist or new entry. This is helpfull when * some default values are needed to be set. 
 *
 * Returns TRUE on success.
 */
static gboolean
gst_overlay_set_bbox_overlay (GstOverlayUser * entry, GstStructure * structure,
    gboolean entry_exist)
{
  GstOverlayUsrBBox * bbox_entry = (GstOverlayUsrBBox *) entry;
  gboolean color_set = FALSE;
  gboolean entry_valid = FALSE;
  gboolean bbox_valid = FALSE;
  gboolean label_valid = FALSE;

  for (gint idx = 0; idx < gst_structure_n_fields (structure); ++idx) {
    const gchar *name = gst_structure_nth_field_name (structure, idx);
    const GValue *value = gst_structure_get_value (structure, name);

    if (!g_strcmp0 (name, "bbox") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) &&
        gst_value_array_get_size (value) == 4) {
      bbox_entry->boundind_box.x = g_value_get_int (gst_value_array_get_value (value, 0));
      bbox_entry->boundind_box.y = g_value_get_int (gst_value_array_get_value (value, 1));
      bbox_entry->boundind_box.w = g_value_get_int (gst_value_array_get_value (value, 2));
      bbox_entry->boundind_box.h = g_value_get_int (gst_value_array_get_value (value, 3));
      bbox_valid = TRUE;
    }

    if (!g_strcmp0 (name, "label") && G_VALUE_HOLDS (value, G_TYPE_STRING)) {
      bbox_entry->label = g_strdup (g_value_get_string (value));
      if (strlen (bbox_entry->label) > 0) {
        label_valid = TRUE;
      } else {
        GST_INFO ("String is empty. Stop overlay if exist");
        free (bbox_entry->label);
        return FALSE;
      }
    }

    if (!g_strcmp0 (name, "color")) {
      if (G_VALUE_HOLDS (value, G_TYPE_UINT)) {
        bbox_entry->color = g_value_get_uint (value);
        color_set = TRUE;
      }
      if (G_VALUE_HOLDS (value, G_TYPE_INT)) {
        bbox_entry->color = (guint) g_value_get_int (value);
        color_set = TRUE;
      }
    }
  }

  entry_valid = bbox_valid && label_valid;
  if (!color_set && entry_valid && !entry_exist) {
    bbox_entry->color = DEFAULT_PROP_OVERLAY_BBOX_COLOR;
  }

  return entry_valid;
}

/**
 * gst_overlay_set_mask_overlay:
 * @entry: result of parsed input is stored here
 * @structure: user input
 * @entry_exist: hint whether the entry already exists or is new. This is
 *   helpful when some default values need to be set.
 *
 * This function parses the user overlay configuration. It is called when
 * the overlay is configured through a GST property.
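 *
 * Illustrative serialized input structures (example values; circle and
 * rectangle are mutually exclusive, as checked below):
 *
 *   mask0, rectangle=<100, 100, 200, 200>, inverse=false, color=0xFF000080, dest-rect=<0, 0, 1920, 1080>
 *   mask1, circle=<320, 240, 60>, inverse=true, color=0xFF000080, dest-rect=<0, 0, 1920, 1080>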
 *
 * Returns TRUE on success.
 */
static gboolean
gst_overlay_set_mask_overlay (GstOverlayUser * entry, GstStructure * structure,
    gboolean entry_exist)
{
  GstOverlayUsrMask * mask_entry = (GstOverlayUsrMask *) entry;
  gboolean color_set = FALSE;
  gboolean entry_valid = FALSE;
  gboolean circle_valid = FALSE;
  gboolean rectangle_valid = FALSE;
  gboolean dest_rect_valid = FALSE;
  gboolean inverse = FALSE;

  for (gint idx = 0; idx < gst_structure_n_fields (structure); ++idx) {
    const gchar *name = gst_structure_nth_field_name (structure, idx);
    const GValue *value = gst_structure_get_value (structure, name);

    if (!g_strcmp0 (name, "circle") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) &&
        gst_value_array_get_size (value) == 3) {
      mask_entry->circle.center_x = g_value_get_int (gst_value_array_get_value (value, 0));
      mask_entry->circle.center_y = g_value_get_int (gst_value_array_get_value (value, 1));
      mask_entry->circle.radius = g_value_get_int (gst_value_array_get_value (value, 2));
      circle_valid = TRUE;
    }

    if (!g_strcmp0 (name, "rectangle") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) &&
        gst_value_array_get_size (value) == 4) {
      mask_entry->rectangle.start_x = g_value_get_int (gst_value_array_get_value (value, 0));
      mask_entry->rectangle.start_y = g_value_get_int (gst_value_array_get_value (value, 1));
      mask_entry->rectangle.width = g_value_get_int (gst_value_array_get_value (value, 2));
      mask_entry->rectangle.height = g_value_get_int (gst_value_array_get_value (value, 3));
      rectangle_valid = TRUE;
    }

    if (!g_strcmp0 (name, "inverse") && G_VALUE_HOLDS (value, G_TYPE_BOOLEAN)) {
      inverse = g_value_get_boolean (value);
    }

    if (!g_strcmp0 (name, "color")) {
      if (G_VALUE_HOLDS (value, G_TYPE_UINT)) {
        mask_entry->color = g_value_get_uint (value);
        color_set = TRUE;
      }
      if (G_VALUE_HOLDS (value, G_TYPE_INT)) {
        mask_entry->color = (guint) g_value_get_int (value);
        color_set = TRUE;
      }
    }

    if (!g_strcmp0 (name, "dest-rect") && G_VALUE_HOLDS (value, GST_TYPE_ARRAY) &&
        gst_value_array_get_size (value) == 4) {
      mask_entry->dest_rect.x = g_value_get_int (gst_value_array_get_value (value, 0));
      mask_entry->dest_rect.y = g_value_get_int (gst_value_array_get_value (value, 1));
      mask_entry->dest_rect.w = g_value_get_int (gst_value_array_get_value (value, 2));
      mask_entry->dest_rect.h = g_value_get_int (gst_value_array_get_value (value, 3));
      dest_rect_valid = TRUE;
    }
  }

  if (circle_valid && rectangle_valid) {
    GST_INFO ("circle and rectangle cannot be set at the same time");
    return FALSE;
  }

  entry_valid = (circle_valid || rectangle_valid) && dest_rect_valid;
  if (entry_valid) {
    if (circle_valid && inverse) {
      mask_entry->type = OverlayPrivacyMaskType::kInverseCircle;
    } else if (circle_valid) {
      mask_entry->type = OverlayPrivacyMaskType::kCircle;
    } else if (rectangle_valid && inverse) {
      mask_entry->type = OverlayPrivacyMaskType::kInverseRectangle;
    } else if (rectangle_valid) {
      mask_entry->type = OverlayPrivacyMaskType::kRectangle;
    } else {
      GST_INFO ("Error cannot find privacy mask type!");
      return FALSE;
    }

    if (!color_set && !entry_exist) {
      mask_entry->color = DEFAULT_PROP_OVERLAY_MASK_COLOR;
    }
  }

  return entry_valid;
}

/**
 * gst_overlay_compare_overlay_id:
 * @a: a value
 * @b: a value to compare with
 *
 * Compares two user overlay instance ids.
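 * Used as the GCompareDataFunc for g_sequence_lookup() and
 * g_sequence_insert_sorted() in gst_overlay_set_user_overlay() below.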
 *
 * Returns 0 if they match.
 */
static gint
gst_overlay_compare_overlay_id (gconstpointer a, gconstpointer b, gpointer)
{
  return g_strcmp0 (((const GstOverlayUser *) a)->user_id,
      ((const GstOverlayUser *) b)->user_id);
}

/**
 * gst_overlay_set_user_overlay:
 * @gst_overlay: context
 * @user_ov: pointer to GSequence of user overlays of the same type
 * @entry_size: size of configuration structure of the overlay type
 * @set_func: function which sets the overlay-type-specific parameters
 * @value: input configuration which comes from GST property
 *
 * The generic setter for all user overlays. This function parses the input
 * string into a GstStructure and checks whether the overlay instance already
 * exists: a new one is created if it does not exist, otherwise the existing
 * one is updated. If mandatory parameters are not provided, the overlay
 * instance is destroyed. set_func is used to set overlay-specific parameters.
 */
static void
gst_overlay_set_user_overlay (GstOverlay *gst_overlay, GSequence * user_ov,
    guint entry_size, GstOverlaySetFunc set_func, const GValue * value)
{
  const gchar *input = g_value_get_string (value);
  if (!input) {
    GST_WARNING ("Empty input. Default value or invalid user input.");
    return;
  }

  GValue gvalue = G_VALUE_INIT;
  g_value_init (&gvalue, GST_TYPE_STRUCTURE);
  gboolean success = gst_value_deserialize (&gvalue, input);
  if (!success) {
    GST_WARNING ("Failed to deserialize text overlay input <%s>", input);
    return;
  }
  GstStructure *structure = GST_STRUCTURE (g_value_dup_boxed (&gvalue));
  g_value_unset (&gvalue);

  gboolean entry_valid = FALSE;
  gboolean entry_exist = FALSE;
  gchar *ov_id = (gchar *) gst_structure_get_name (structure);

  g_mutex_lock (&gst_overlay->lock);

  GstOverlayUser * entry = NULL;
  GstOverlayUser lookup;
  lookup.user_id = ov_id;
  GSequenceIter * iter = g_sequence_lookup (user_ov, &lookup,
      gst_overlay_compare_overlay_id, NULL);
  if (iter) {
    entry = (GstOverlayUser *) g_sequence_get (iter);
    entry_exist = TRUE;
  } else {
    entry = (GstOverlayUser *) calloc (1, entry_size);
    if (!entry) {
      GST_ERROR ("failed to allocate memory for new entry");
      g_mutex_unlock (&gst_overlay->lock);
      return;
    }
    entry->user_data = gst_overlay;
  }

  entry_valid = set_func (entry, structure, entry_exist);
  gst_structure_free (structure);

  if (entry_valid && entry_exist) {
    entry->is_applied = FALSE;
  } else if (entry_valid) {
    entry->user_id = g_strdup (ov_id);
    g_sequence_insert_sorted (user_ov, entry, gst_overlay_compare_overlay_id,
        NULL);
  } else if (entry_exist) {
    g_sequence_remove (iter);
  } else {
    free (entry);
  }

  g_mutex_unlock (&gst_overlay->lock);
}

/**
 * gst_overlay_text_overlay_to_string:
 * @data: user text overlay entry of GstOverlayUsrText type
 * @user_data: output string of GstOverlayString type
 *
 * Converts text overlay configuration to string.
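 *
 * A single entry serializes to a segment of the form (illustrative values):
 *
 *   id0, text="Hello", color=0xFF0000FF, dest-rect=<16, 16, 256, 64>;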
*/ static void gst_overlay_text_overlay_to_string (gpointer data, gpointer user_data) { GstOverlayUsrText * ov_data = (GstOverlayUsrText *) data; GstOverlayString * output = (GstOverlayString *) user_data; gint size = GST_OVERLAY_TEXT_STRING_SIZE + strlen (ov_data->base.user_id) + strlen (ov_data->text); gchar * tmp = (gchar *) malloc(size); if (!tmp) { GST_ERROR ("%s: failed to allocate memory", __func__); return; } gint ret = snprintf (tmp, size, "%s, text=\"%s\", color=0x%x, dest-rect=<%d, %d, %d, %d>; ", ov_data->base.user_id, ov_data->text, ov_data->color, ov_data->dest_rect.x, ov_data->dest_rect.y, ov_data->dest_rect.w, ov_data->dest_rect.h); if (ret < 0 || ret >= size) { GST_ERROR ("%s: String size %d exceed size %d", __func__, ret, size); free (tmp); return; } if (output->capacity < (strlen (output->string) + strlen (tmp))) { size = (strlen (output->string) + strlen (tmp)) * 2; output->string = (gchar *) realloc(output->string, size); if (!output->string) { GST_ERROR ("%s: Failed to reallocate memory. Size %d", __func__, size); free (tmp); return; } output->capacity = size; } g_strlcat (output->string, tmp, output->capacity); free (tmp); } /** * gst_overlay_date_overlay_to_string: * @data: user date overlay entry of GstOverlayUsrDate type * @user_data: output string of GstOverlayString type * * Converts date overlay configuration to string. */ static void gst_overlay_date_overlay_to_string (gpointer data, gpointer user_data) { GstOverlayUsrDate * ov_data = (GstOverlayUsrDate *) data; GstOverlayString * output = (GstOverlayString *) user_data; gint size = GST_OVERLAY_DATE_STRING_SIZE + strlen (ov_data->base.user_id); gchar * tmp = (gchar *) malloc (size); if (!tmp) { GST_ERROR ("%s: failed to allocate memory", __func__); return; } gchar * date_format; switch (ov_data->date_format) { case OverlayDateFormatType::kYYYYMMDD: date_format = (gchar *)"YYYYMMDD"; break; case OverlayDateFormatType::kMMDDYYYY: date_format = (gchar *)"MMDDYYYY"; break; default: GST_ERROR ("Error unsupported date format %d", (gint)ov_data->date_format); free (tmp); return; } gchar * time_format; switch (ov_data->time_format) { case OverlayTimeFormatType::kHHMMSS_24HR: time_format = (gchar *)"HHMMSS_24HR"; break; case OverlayTimeFormatType::kHHMMSS_AMPM: time_format = (gchar *)"HHMMSS_AMPM"; break; case OverlayTimeFormatType::kHHMM_24HR: time_format = (gchar *)"HHMM_24HR"; break; case OverlayTimeFormatType::kHHMM_AMPM: time_format = (gchar *)"HHMM_AMPM"; break; default: GST_ERROR ("Error unsupported time format %d", (gint)ov_data->time_format); free (tmp); return; } gint ret = snprintf (tmp, size, "%s, date-format=%s, time-format=%s, color=0x%x, dest-rect=<%d, %d, %d, %d>; ", ov_data->base.user_id, date_format, time_format, ov_data->color, ov_data->dest_rect.x, ov_data->dest_rect.y, ov_data->dest_rect.w, ov_data->dest_rect.h); if (ret < 0 || ret >= size) { GST_ERROR ("%s: String size %d exceed size %d", __func__, ret, size); free (tmp); return; } if (output->capacity < (strlen (output->string) + strlen (tmp))) { size = (strlen (output->string) + strlen (tmp)) * 2; output->string = (gchar *) realloc(output->string, size); if (!output->string) { GST_ERROR ("%s: Failed to reallocate memory. 
Size %d", __func__, size); free (tmp); return; } output->capacity = size; } g_strlcat (output->string, tmp, output->capacity); free (tmp); } /** * gst_overlay_simg_overlay_to_string: * @data: user static image overlay entry of GstOverlayUsrSImg type * @user_data: output string of GstOverlayString type * * Converts static image overlay configuration to string. */ static void gst_overlay_simg_overlay_to_string (gpointer data, gpointer user_data) { GstOverlayUsrSImg * ov_data = (GstOverlayUsrSImg *) data; GstOverlayString * output = (GstOverlayString *) user_data; gint size = GST_OVERLAY_SIMG_STRING_SIZE + strlen (ov_data->base.user_id) + strlen (ov_data->img_file); gchar * tmp = (gchar *) malloc(size); if (!tmp) { GST_ERROR ("%s: failed to allocate memory", __func__); return; } gint ret = snprintf (tmp, size, "%s, image=\"%s\", resolution=<%d, %d>, dest-rect=<%d, %d, %d, %d>; ", ov_data->base.user_id, ov_data->img_file, ov_data->img_width, ov_data->img_height, ov_data->dest_rect.x, ov_data->dest_rect.y, ov_data->dest_rect.w, ov_data->dest_rect.h); if (ret < 0 || ret >= size) { GST_ERROR ("%s: String size %d exceed size %d", __func__, ret, size); free (tmp); return; } if (output->capacity < (strlen (output->string) + strlen (tmp))) { size = (strlen (output->string) + strlen (tmp)) * 2; output->string = (gchar *) realloc(output->string, size); if (!output->string) { GST_ERROR ("%s: Failed to reallocate memory. Size %d", __func__, size); free (tmp); return; } output->capacity = size; } g_strlcat (output->string, tmp, output->capacity); free (tmp); } /** * gst_overlay_bbox_overlay_to_string: * @data: user text overlay entry of GstOverlayUsrBBox type * @user_data: output string of GstOverlayString type * * Converts text overlay configuration to string. */ static void gst_overlay_bbox_overlay_to_string (gpointer data, gpointer user_data) { GstOverlayUsrBBox * ov_data = (GstOverlayUsrBBox *) data; GstOverlayString * output = (GstOverlayString *) user_data; gint size = GST_OVERLAY_BBOX_STRING_SIZE + strlen (ov_data->base.user_id) + strlen (ov_data->label); gchar * tmp = (gchar *) malloc(size); if (!tmp) { GST_ERROR ("%s: failed to allocate memory", __func__); return; } gint ret = snprintf (tmp, size, "%s, bbox=<%d, %d, %d, %d>, label=\"%s\", color=0x%x; ", ov_data->base.user_id, ov_data->boundind_box.x, ov_data->boundind_box.y, ov_data->boundind_box.w, ov_data->boundind_box.h, ov_data->label, ov_data->color); if (ret < 0 || ret >= size) { GST_ERROR ("%s: String size %d exceed size %d", __func__, ret, size); free (tmp); return; } if (output->capacity < (strlen (output->string) + strlen (tmp))) { size = (strlen (output->string) + strlen (tmp)) * 2; output->string = (gchar *) realloc(output->string, size); if (!output->string) { GST_ERROR ("%s: Failed to reallocate memory. Size %d", __func__, size); free (tmp); return; } output->capacity = size; } g_strlcat (output->string, tmp, output->capacity); free (tmp); } /** * gst_overlay_mask_overlay_to_string: * @data: user text overlay entry of GstOverlayUsrMask type * @user_data: output string of GstOverlayString type * * Converts privacy mask overlay configuration to string. 
 */
static void
gst_overlay_mask_overlay_to_string (gpointer data, gpointer user_data)
{
  GstOverlayUsrMask * ov_data = (GstOverlayUsrMask *) data;
  GstOverlayString * output = (GstOverlayString *) user_data;

  gint size = GST_OVERLAY_MASK_STRING_SIZE + strlen (ov_data->base.user_id);
  gchar * tmp = (gchar *) malloc (size);
  if (!tmp) {
    GST_ERROR ("%s: failed to allocate memory", __func__);
    return;
  }

  gint ret;
  if (ov_data->type == OverlayPrivacyMaskType::kRectangle ||
      ov_data->type == OverlayPrivacyMaskType::kInverseRectangle) {
    ret = snprintf (tmp, size,
        "%s, rectangle=<%d, %d, %d, %d>, inverse=%s, color=0x%x, dest-rect=<%d, %d, %d, %d>; ",
        ov_data->base.user_id, ov_data->rectangle.start_x,
        ov_data->rectangle.start_y, ov_data->rectangle.width,
        ov_data->rectangle.height,
        ov_data->type == OverlayPrivacyMaskType::kRectangle ? "false" : "true",
        ov_data->color, ov_data->dest_rect.x, ov_data->dest_rect.y,
        ov_data->dest_rect.w, ov_data->dest_rect.h);
  } else {
    ret = snprintf (tmp, size,
        "%s, circle=<%d, %d, %d>, inverse=%s, color=0x%x, dest-rect=<%d, %d, %d, %d>; ",
        ov_data->base.user_id, ov_data->circle.center_x,
        ov_data->circle.center_y, ov_data->circle.radius,
        ov_data->type == OverlayPrivacyMaskType::kCircle ? "false" : "true",
        ov_data->color, ov_data->dest_rect.x, ov_data->dest_rect.y,
        ov_data->dest_rect.w, ov_data->dest_rect.h);
  }
  if (ret < 0 || ret >= size) {
    GST_ERROR ("%s: String size %d exceed size %d", __func__, ret, size);
    free (tmp);
    return;
  }

  if (output->capacity < (strlen (output->string) + strlen (tmp))) {
    size = (strlen (output->string) + strlen (tmp)) * 2;
    output->string = (gchar *) realloc (output->string, size);
    if (!output->string) {
      GST_ERROR ("%s: Failed to reallocate memory. Size %d", __func__, size);
      free (tmp);
      return;
    }
    output->capacity = size;
  }
  g_strlcat (output->string, tmp, output->capacity);
  free (tmp);
}

/**
 * gst_overlay_get_user_overlay:
 * @gst_overlay: context
 * @value: output value
 * @user_ov: list of overlay settings of one type
 * @get_func: function used to convert one overlay entry to string
 *
 * The generic getter for all user overlay settings. This function iterates
 * over all overlay instances provided by the user_ov parameter and converts
 * them to a string using the provided get_func function.
 */
static void
gst_overlay_get_user_overlay (GstOverlay *gst_overlay, GValue * value,
    GSequence * user_ov, GstOverlayGetFunc get_func)
{
  g_mutex_lock (&gst_overlay->lock);

  GstOverlayString output;
  output.capacity = GST_OVERLAY_TO_STRING_SIZE;
  output.string = (gchar *) malloc (GST_OVERLAY_TO_STRING_SIZE);
  if (!output.string) {
    GST_ERROR ("%s: failed to allocate memory", __func__);
    g_mutex_unlock (&gst_overlay->lock);
    return;
  }
  output.string[0] = '\0';

  g_sequence_foreach (user_ov, get_func, &output);

  g_value_set_string (value, output.string);
  free (output.string);

  g_mutex_unlock (&gst_overlay->lock);
}

/**
 * gst_overlay_set_property:
 * @object: gst overlay object
 * @prop_id: GST property id
 * @value: value of GST property
 * @pspec: parameter features
 *
 * The generic setter for all properties of this type.
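 *
 * A minimal usage sketch from application code (property names as installed
 * in gst_overlay_class_init() below; "overlay" is the element instance and
 * the id "id0" and all values are illustrative):
 *
 *   g_object_set (G_OBJECT (overlay), "overlay-text",
 *       "id0, text=\"Hello\", color=0xFF0000FF, dest-rect=<16, 16, 256, 64>",
 *       NULL);
 *   g_object_set (G_OBJECT (overlay), "bbox-color", 0x00FF00FF, NULL);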
 */
static void
gst_overlay_set_property (GObject * object, guint prop_id,
    const GValue * value, GParamSpec * pspec)
{
  GstOverlay *gst_overlay = GST_OVERLAY (object);
  const gchar *propname = g_param_spec_get_name (pspec);
  GstState state = GST_STATE (gst_overlay);

  if (!OVERLAY_IS_PROPERTY_MUTABLE_IN_CURRENT_STATE (pspec, state)) {
    GST_WARNING ("Property '%s' change not supported in %s state!", propname,
        gst_element_state_get_name (state));
    return;
  }

  GST_OBJECT_LOCK (gst_overlay);
  switch (prop_id) {
    case PROP_OVERLAY_TEXT:
      gst_overlay_set_user_overlay (gst_overlay, gst_overlay->usr_text,
          sizeof (GstOverlayUsrText), gst_overlay_set_text_overlay, value);
      break;
    case PROP_OVERLAY_DATE:
      gst_overlay_set_user_overlay (gst_overlay, gst_overlay->usr_date,
          sizeof (GstOverlayUsrDate), gst_overlay_set_date_overlay, value);
      break;
    case PROP_OVERLAY_SIMG:
      gst_overlay_set_user_overlay (gst_overlay, gst_overlay->usr_simg,
          sizeof (GstOverlayUsrSImg), gst_overlay_set_simg_overlay, value);
      break;
    case PROP_OVERLAY_BBOX:
      gst_overlay_set_user_overlay (gst_overlay, gst_overlay->usr_bbox,
          sizeof (GstOverlayUsrBBox), gst_overlay_set_bbox_overlay, value);
      break;
    case PROP_OVERLAY_MASK:
      gst_overlay_set_user_overlay (gst_overlay, gst_overlay->usr_mask,
          sizeof (GstOverlayUsrMask), gst_overlay_set_mask_overlay, value);
      break;
    case PROP_OVERLAY_META_COLOR:
      gst_overlay->meta_color = g_value_get_boolean (value);
      break;
    case PROP_OVERLAY_BBOX_COLOR:
      gst_overlay->bbox_color = g_value_get_uint (value);
      break;
    case PROP_OVERLAY_DATE_COLOR:
      gst_overlay->date_color = g_value_get_uint (value);
      break;
    case PROP_OVERLAY_TEXT_COLOR:
      gst_overlay->text_color = g_value_get_uint (value);
      break;
    case PROP_OVERLAY_POSE_COLOR:
      gst_overlay->pose_color = g_value_get_uint (value);
      break;
    case PROP_OVERLAY_TEXT_DEST_RECT:
      if (gst_value_array_get_size (value) != 4) {
        GST_DEBUG_OBJECT (gst_overlay,
            "dest-rect is not set. Use default values.");
        break;
      }
      gst_overlay->text_dest_rect.x = g_value_get_int (gst_value_array_get_value (value, 0));
      gst_overlay->text_dest_rect.y = g_value_get_int (gst_value_array_get_value (value, 1));
      gst_overlay->text_dest_rect.w = g_value_get_int (gst_value_array_get_value (value, 2));
      gst_overlay->text_dest_rect.h = g_value_get_int (gst_value_array_get_value (value, 3));
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
      break;
  }
  GST_OBJECT_UNLOCK (gst_overlay);
}

/**
 * gst_overlay_get_property:
 * @object: gst overlay object
 * @prop_id: GST property id
 * @value: output value
 * @pspec: parameter features
 *
 * The generic getter for all properties of this type.
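 *
 * A minimal usage sketch (the returned string uses the same serialized form
 * accepted by the corresponding setter; illustrative):
 *
 *   gchar *text = NULL;
 *   g_object_get (G_OBJECT (overlay), "overlay-text", &text, NULL);
 *   GST_INFO ("current text overlays: %s", text);
 *   g_free (text);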
 */
static void
gst_overlay_get_property (GObject * object, guint prop_id, GValue * value,
    GParamSpec * pspec)
{
  GstOverlay *gst_overlay = GST_OVERLAY (object);

  GST_OBJECT_LOCK (gst_overlay);
  switch (prop_id) {
    case PROP_OVERLAY_TEXT:
      gst_overlay_get_user_overlay (gst_overlay, value, gst_overlay->usr_text,
          gst_overlay_text_overlay_to_string);
      break;
    case PROP_OVERLAY_DATE:
      gst_overlay_get_user_overlay (gst_overlay, value, gst_overlay->usr_date,
          gst_overlay_date_overlay_to_string);
      break;
    case PROP_OVERLAY_SIMG:
      gst_overlay_get_user_overlay (gst_overlay, value, gst_overlay->usr_simg,
          gst_overlay_simg_overlay_to_string);
      break;
    case PROP_OVERLAY_BBOX:
      gst_overlay_get_user_overlay (gst_overlay, value, gst_overlay->usr_bbox,
          gst_overlay_bbox_overlay_to_string);
      break;
    case PROP_OVERLAY_MASK:
      gst_overlay_get_user_overlay (gst_overlay, value, gst_overlay->usr_mask,
          gst_overlay_mask_overlay_to_string);
      break;
    case PROP_OVERLAY_META_COLOR:
      g_value_set_boolean (value, gst_overlay->meta_color);
      break;
    case PROP_OVERLAY_BBOX_COLOR:
      g_value_set_uint (value, gst_overlay->bbox_color);
      break;
    case PROP_OVERLAY_DATE_COLOR:
      g_value_set_uint (value, gst_overlay->date_color);
      break;
    case PROP_OVERLAY_TEXT_COLOR:
      g_value_set_uint (value, gst_overlay->text_color);
      break;
    case PROP_OVERLAY_POSE_COLOR:
      g_value_set_uint (value, gst_overlay->pose_color);
      break;
    case PROP_OVERLAY_TEXT_DEST_RECT: {
      GValue val = G_VALUE_INIT;
      g_value_init (&val, G_TYPE_INT);
      g_value_set_int (&val, gst_overlay->text_dest_rect.x);
      gst_value_array_append_value (value, &val);
      g_value_set_int (&val, gst_overlay->text_dest_rect.y);
      gst_value_array_append_value (value, &val);
      g_value_set_int (&val, gst_overlay->text_dest_rect.w);
      gst_value_array_append_value (value, &val);
      g_value_set_int (&val, gst_overlay->text_dest_rect.h);
      gst_value_array_append_value (value, &val);
      break;
    }
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
      break;
  }
  GST_OBJECT_UNLOCK (gst_overlay);
}

/**
 * gst_overlay_finalize:
 * @object: gst overlay object
 *
 * GST object finalize handler.
 */
static void
gst_overlay_finalize (GObject * object)
{
  GstOverlay *gst_overlay = GST_OVERLAY (object);

  if (gst_overlay->overlay) {
    g_sequence_foreach (gst_overlay->bbox_id, gst_overlay_destroy_overlay_item,
        gst_overlay->overlay);
    g_sequence_free (gst_overlay->bbox_id);
    g_sequence_foreach (gst_overlay->simg_id, gst_overlay_destroy_overlay_item,
        gst_overlay->overlay);
    g_sequence_free (gst_overlay->simg_id);
    g_sequence_foreach (gst_overlay->text_id, gst_overlay_destroy_overlay_item,
        gst_overlay->overlay);
    g_sequence_free (gst_overlay->text_id);
    g_sequence_foreach (gst_overlay->pose_id, gst_overlay_destroy_overlay_item,
        gst_overlay->overlay);
    g_sequence_free (gst_overlay->pose_id);

    g_sequence_free (gst_overlay->usr_text);
    g_sequence_free (gst_overlay->usr_date);
    g_sequence_free (gst_overlay->usr_simg);
    g_sequence_free (gst_overlay->usr_bbox);
    g_sequence_free (gst_overlay->usr_mask);

    delete (gst_overlay->overlay);
    gst_overlay->overlay = nullptr;
  }

  g_mutex_clear (&gst_overlay->lock);

  G_OBJECT_CLASS (parent_class)->finalize (G_OBJECT (gst_overlay));
}

/**
 * gst_overlay_set_info:
 * @filter: gst overlay object
 * @in: negotiated sink pad capabilities
 * @ininfo: information describing input image properties
 * @out: negotiated source pad capabilities
 * @outinfo: information describing output image properties
 *
 * Function to be called with the negotiated caps and video infos.
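 *
 * Only NV12 and NV21 input is handled below; e.g. negotiated caps of
 * video/x-raw,format=NV12,width=1920,height=1080 map to
 * TargetBufferFormat::kYUVNV12.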
 *
 * Returns TRUE on success.
 */
static gboolean
gst_overlay_set_info (GstVideoFilter * filter, GstCaps * in,
    GstVideoInfo * ininfo, GstCaps * out, GstVideoInfo * outinfo)
{
  GstOverlay *gst_overlay = GST_OVERLAY (filter);
  TargetBufferFormat new_format;
  GST_OVERLAY_UNUSED (in);
  GST_OVERLAY_UNUSED (out);
  GST_OVERLAY_UNUSED (outinfo);

  gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (filter), FALSE);

  gst_overlay->width = GST_VIDEO_INFO_WIDTH (ininfo);
  gst_overlay->height = GST_VIDEO_INFO_HEIGHT (ininfo);

  switch (GST_VIDEO_INFO_FORMAT (ininfo)) {  // GstVideoFormat
    case GST_VIDEO_FORMAT_NV12:
      new_format = TargetBufferFormat::kYUVNV12;
      break;
    case GST_VIDEO_FORMAT_NV21:
      new_format = TargetBufferFormat::kYUVNV21;
      break;
    default:
      GST_ERROR_OBJECT (gst_overlay, "Unhandled gst format: %d",
          GST_VIDEO_INFO_FORMAT (ininfo));
      return FALSE;
  }

  if (gst_overlay->overlay && gst_overlay->format == new_format) {
    GST_DEBUG_OBJECT (gst_overlay, "Overlay already initialized");
    return TRUE;
  }

  if (gst_overlay->overlay) {
    delete (gst_overlay->overlay);
  }

  gst_overlay->format = new_format;
  gst_overlay->overlay = new Overlay();

  int32_t ret = gst_overlay->overlay->Init (gst_overlay->format);
  if (ret != 0) {
    GST_ERROR_OBJECT (gst_overlay, "Overlay init failed! Format: %u",
        (guint) gst_overlay->format);
    delete (gst_overlay->overlay);
    gst_overlay->overlay = nullptr;
    return FALSE;
  }

  return TRUE;
}

/**
 * gst_overlay_transform_frame_ip:
 * @filter: gst overlay object
 * @frame: GST video buffer
 *
 * Applies all overlay items from machine learning metadata and user-provided
 * overlays to the video frame in place.
 *
 * Returns GST_FLOW_OK on success, otherwise GST_FLOW_ERROR.
 */
static GstFlowReturn
gst_overlay_transform_frame_ip (GstVideoFilter *filter, GstVideoFrame *frame)
{
  GstOverlay *gst_overlay = GST_OVERLAY_CAST (filter);
  gboolean res = TRUE;

  if (!gst_overlay->overlay) {
    GST_ERROR_OBJECT (gst_overlay, "failed: overlay not initialized");
    return GST_FLOW_ERROR;
  }

  res = gst_overlay_apply_item_list (gst_overlay,
      gst_buffer_get_detection_meta (frame->buffer),
      gst_overlay_apply_ml_bbox_item, gst_overlay->bbox_id);
  if (!res) {
    GST_ERROR_OBJECT (gst_overlay, "Overlay apply bbox item list failed!");
    return GST_FLOW_ERROR;
  }

  res = gst_overlay_apply_item_list (gst_overlay,
      gst_buffer_get_segmentation_meta (frame->buffer),
      gst_overlay_apply_ml_simg_item, gst_overlay->simg_id);
  if (!res) {
    GST_ERROR_OBJECT (gst_overlay, "Overlay apply image item list failed!");
    return GST_FLOW_ERROR;
  }

  res = gst_overlay_apply_item_list (gst_overlay,
      gst_buffer_get_classification_meta (frame->buffer),
      gst_overlay_apply_ml_text_item, gst_overlay->text_id);
  if (!res) {
    GST_ERROR_OBJECT (gst_overlay,
        "Overlay apply classification item list failed!");
    return GST_FLOW_ERROR;
  }

  res = gst_overlay_apply_item_list (gst_overlay,
      gst_buffer_get_posenet_meta (frame->buffer),
      gst_overlay_apply_ml_pose_item, gst_overlay->pose_id);
  if (!res) {
    GST_ERROR_OBJECT (gst_overlay, "Overlay apply pose item list failed!");
    return GST_FLOW_ERROR;
  }

  g_mutex_lock (&gst_overlay->lock);
  g_sequence_foreach (gst_overlay->usr_text, gst_overlay_apply_user_text_item,
      gst_overlay);
  g_sequence_foreach (gst_overlay->usr_date, gst_overlay_apply_user_date_item,
      gst_overlay);
  g_sequence_foreach (gst_overlay->usr_simg, gst_overlay_apply_user_simg_item,
      gst_overlay);
  g_sequence_foreach (gst_overlay->usr_bbox, gst_overlay_apply_user_bbox_item,
      gst_overlay);
  g_sequence_foreach (gst_overlay->usr_mask, gst_overlay_apply_user_mask_item,
      gst_overlay);
  g_mutex_unlock (&gst_overlay->lock);

  if (!g_sequence_is_empty (gst_overlay->bbox_id) ||
!g_sequence_is_empty (gst_overlay->simg_id) || !g_sequence_is_empty (gst_overlay->text_id) || !g_sequence_is_empty (gst_overlay->pose_id) || !g_sequence_is_empty (gst_overlay->usr_text) || !g_sequence_is_empty (gst_overlay->usr_date) || !g_sequence_is_empty (gst_overlay->usr_simg) || !g_sequence_is_empty (gst_overlay->usr_bbox) || !g_sequence_is_empty (gst_overlay->usr_mask)) { res = gst_overlay_apply_overlay (gst_overlay, frame); if (!res) { GST_ERROR_OBJECT (gst_overlay, "Overlay apply failed!"); return GST_FLOW_ERROR; } } return GST_FLOW_OK; } /** * gst_overlay_free_user_overlay_entry: * @ptr: GstOverlayUser * * Disable overlay item and free all user overlay common data. All resources * freed in this function are allocated in gst_overlay_set_user_overlay(). */ static void gst_overlay_free_user_overlay_entry (gpointer ptr) { if (ptr) { GstOverlayUser * entry = (GstOverlayUser *) ptr; GstOverlay * gst_overlay = (GstOverlay *) entry->user_data; if (entry->item_id && gst_overlay && gst_overlay->overlay) { gst_overlay_destroy_overlay_item (&entry->item_id, gst_overlay->overlay); } free (entry->user_id); free (entry); } } /** * gst_overlay_free_user_text_entry: * @ptr: GstOverlayUsrText * * Free text user overlay data. */ static void gst_overlay_free_user_text_entry (gpointer ptr) { if (ptr) { GstOverlayUsrText * entry = (GstOverlayUsrText *) ptr; free (entry->text); gst_overlay_free_user_overlay_entry (ptr); } } /** * gst_overlay_free_user_simg_entry: * @ptr: GstOverlayUsrSImg * * Free static image user overlay data. */ static void gst_overlay_free_user_simg_entry (gpointer ptr) { if (ptr) { GstOverlayUsrSImg * entry = (GstOverlayUsrSImg *) ptr; free (entry->img_file); free (entry->img_buffer); gst_overlay_free_user_overlay_entry (ptr); } } /** * gst_overlay_free_user_bbox_entry: * @ptr: GstOverlayUsrBBox * * Free bounding box user overlay data. 
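 *
 * Registered as the GDestroyNotify for the usr_bbox sequence in
 * gst_overlay_init() below.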
 */
static void
gst_overlay_free_user_bbox_entry (gpointer ptr)
{
  if (ptr) {
    GstOverlayUsrBBox * entry = (GstOverlayUsrBBox *) ptr;
    free (entry->label);
    gst_overlay_free_user_overlay_entry (ptr);
  }
}

static void
gst_overlay_init (GstOverlay * gst_overlay)
{
  gst_overlay->overlay = nullptr;

  gst_overlay->bbox_id = g_sequence_new (free);
  gst_overlay->simg_id = g_sequence_new (free);
  gst_overlay->text_id = g_sequence_new (free);
  gst_overlay->pose_id = g_sequence_new (free);

  gst_overlay->usr_text = g_sequence_new (gst_overlay_free_user_text_entry);
  gst_overlay->usr_date = g_sequence_new (gst_overlay_free_user_overlay_entry);
  gst_overlay->usr_simg = g_sequence_new (gst_overlay_free_user_simg_entry);
  gst_overlay->usr_bbox = g_sequence_new (gst_overlay_free_user_bbox_entry);
  gst_overlay->usr_mask = g_sequence_new (gst_overlay_free_user_overlay_entry);

  gst_overlay->meta_color = DEFAULT_PROP_OVERLAY_META_COLOR;
  gst_overlay->bbox_color = DEFAULT_PROP_OVERLAY_BBOX_COLOR;
  gst_overlay->date_color = DEFAULT_PROP_OVERLAY_DATE_COLOR;
  gst_overlay->text_color = DEFAULT_PROP_OVERLAY_TEXT_COLOR;
  gst_overlay->pose_color = DEFAULT_PROP_OVERLAY_POSE_COLOR;

  gst_overlay->text_dest_rect.x = DEFAULT_PROP_DEST_RECT_X;
  gst_overlay->text_dest_rect.y = DEFAULT_PROP_DEST_RECT_Y;
  gst_overlay->text_dest_rect.w = DEFAULT_PROP_DEST_RECT_WIDTH;
  gst_overlay->text_dest_rect.h = DEFAULT_PROP_DEST_RECT_HEIGHT;

  g_mutex_init (&gst_overlay->lock);

  GST_DEBUG_CATEGORY_INIT (overlay_debug, "qtioverlay", 0, "QTI overlay");
}

static void
gst_overlay_class_init (GstOverlayClass * klass)
{
  GObjectClass *gobject = G_OBJECT_CLASS (klass);
  GstElementClass *element = GST_ELEMENT_CLASS (klass);
  GstVideoFilterClass *filter = GST_VIDEO_FILTER_CLASS (klass);

  gobject->set_property = GST_DEBUG_FUNCPTR (gst_overlay_set_property);
  gobject->get_property = GST_DEBUG_FUNCPTR (gst_overlay_get_property);
  gobject->finalize = GST_DEBUG_FUNCPTR (gst_overlay_finalize);

  g_object_class_install_property (gobject, PROP_OVERLAY_TEXT,
      g_param_spec_string ("overlay-text", "Text Overlay",
          "Renders text on top of video stream.", DEFAULT_PROP_OVERLAY_TEXT,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING)));

  g_object_class_install_property (gobject, PROP_OVERLAY_DATE,
      g_param_spec_string ("overlay-date", "Date Overlay",
          "Renders date and time on top of video stream.",
          DEFAULT_PROP_OVERLAY_DATE,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING)));

  g_object_class_install_property (gobject, PROP_OVERLAY_SIMG,
      g_param_spec_string ("overlay-simg", "Static Image Overlay",
          "Renders static image on top of video stream.",
          DEFAULT_PROP_OVERLAY_DATE,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING)));

  g_object_class_install_property (gobject, PROP_OVERLAY_BBOX,
      g_param_spec_string ("overlay-bbox", "Bounding Box Overlay",
          "Renders bounding box and label on top of video stream.",
          DEFAULT_PROP_OVERLAY_TEXT,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING)));

  g_object_class_install_property (gobject, PROP_OVERLAY_MASK,
      g_param_spec_string ("overlay-mask", "Privacy Mask Overlay",
          "Renders privacy mask on top of video stream.",
          DEFAULT_PROP_OVERLAY_TEXT,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS | GST_PARAM_MUTABLE_PLAYING)));

  g_object_class_install_property (gobject, PROP_OVERLAY_META_COLOR,
      g_param_spec_boolean ("meta-color", "Meta color",
          "Bounding box overlay uses metadata color",
          DEFAULT_PROP_OVERLAY_META_COLOR,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS)));

  g_object_class_install_property (gobject, PROP_OVERLAY_BBOX_COLOR,
      g_param_spec_uint ("bbox-color", "BBox color",
          "Bounding box overlay color", 0, G_MAXUINT,
          DEFAULT_PROP_OVERLAY_BBOX_COLOR,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS)));

  g_object_class_install_property (gobject, PROP_OVERLAY_DATE_COLOR,
      g_param_spec_uint ("date-color", "Date color",
          "Date overlay color", 0, G_MAXUINT,
          DEFAULT_PROP_OVERLAY_DATE_COLOR,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS)));

  g_object_class_install_property (gobject, PROP_OVERLAY_TEXT_COLOR,
      g_param_spec_uint ("text-color", "Text color",
          "Text overlay color", 0, G_MAXUINT,
          DEFAULT_PROP_OVERLAY_TEXT_COLOR,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS)));

  g_object_class_install_property (gobject, PROP_OVERLAY_POSE_COLOR,
      g_param_spec_uint ("pose-color", "Pose color",
          "Pose overlay color", 0, G_MAXUINT,
          DEFAULT_PROP_OVERLAY_POSE_COLOR,
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS)));

  g_object_class_install_property (gobject, PROP_OVERLAY_TEXT_DEST_RECT,
      gst_param_spec_array ("dest-rect-ml-text",
          "Destination Rectangle for ML Detection overlay",
          "Destination rectangle params for ML Detection overlay. "
          "The Start-X, Start-Y, Width, Height of the destination rectangle "
          "format is <Start-X, Start-Y, Width, Height>",
          g_param_spec_int ("coord", "Coordinate",
              "One of X, Y, Width, Height value.", 0, G_MAXINT, 0,
              static_cast<GParamFlags> (G_PARAM_WRITABLE |
                  G_PARAM_STATIC_STRINGS)),
          static_cast<GParamFlags> (G_PARAM_CONSTRUCT | G_PARAM_READWRITE |
              G_PARAM_STATIC_STRINGS)));

  gst_element_class_set_static_metadata (element, "QTI Overlay", "Overlay",
      "This plugin renders text, image, bounding box or graph on top of a "
      "video stream.", "QTI");

  gst_element_class_add_pad_template (element, gst_overlay_sink_template ());
  gst_element_class_add_pad_template (element, gst_overlay_src_template ());

  filter->set_info = GST_DEBUG_FUNCPTR (gst_overlay_set_info);
  filter->transform_frame_ip =
      GST_DEBUG_FUNCPTR (gst_overlay_transform_frame_ip);
}

static gboolean
plugin_init (GstPlugin * plugin)
{
  return gst_element_register (plugin, "qtioverlay", GST_RANK_PRIMARY,
      GST_TYPE_OVERLAY);
}

GST_PLUGIN_DEFINE (
  GST_VERSION_MAJOR,
  GST_VERSION_MINOR,
  qtioverlay,
  "QTI Overlay. This plugin renders text, image, bounding box or graph on "
  "top of a video stream.",
  plugin_init,
  PACKAGE_VERSION,
  PACKAGE_LICENSE,
  PACKAGE_SUMMARY,
  PACKAGE_ORIGIN
)

================================================
FILE: qti_gst_plugins/qtioverlay/qtioverlay/gstoverlay.h
================================================
/*
 * Copyright (c) 2020, The Linux Foundation. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 *       copyright notice, this list of conditions and the following
 *       disclaimer in the documentation and/or other materials provided
 *       with the distribution.
 *     * Neither the name of The Linux Foundation nor the names of its
 *       contributors may be used to endorse or promote products derived
 *       from this software without specific prior written permission.
* * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef __GST_QTI_OVERLAY_H__ #define __GST_QTI_OVERLAY_H__ #include #include #include #include #include using namespace qmmf::overlay; G_BEGIN_DECLS #define GST_TYPE_OVERLAY \ (gst_overlay_get_type()) #define GST_OVERLAY(obj) \ (G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_OVERLAY,GstOverlay)) #define GST_OVERLAY_CLASS(klass) \ (G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_OVERLAY,GstOverlayClass)) #define GST_IS_OVERLAY(obj) \ (G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_OVERLAY)) #define GST_IS_OVERLAY_CLASS(klass) \ (G_TYPE_CHECK_CLASS_TYPE((klass),GST_TYPE_OVERLAY)) #define GST_OVERLAY_CAST(obj) ((GstOverlay *)(obj)) typedef struct _GstOverlay GstOverlay; typedef struct _GstOverlayClass GstOverlayClass; typedef struct _GstOverlayUser GstOverlayUser; typedef struct _GstOverlayUsrText GstOverlayUsrText; typedef struct _GstOverlayUsrDate GstOverlayUsrDate; typedef struct _GstOverlayUsrSImg GstOverlayUsrSImg; typedef struct _GstOverlayUsrBBox GstOverlayUsrBBox; typedef struct _GstOverlayUsrMask GstOverlayUsrMask; typedef struct _GstOverlayString GstOverlayString; struct _GstOverlay { GstVideoFilter parent; Overlay *overlay; TargetBufferFormat format; guint width; guint height; GMutex lock; gboolean meta_color; /* Machine learning overlay */ GSequence *bbox_id; GSequence *simg_id; GSequence *text_id; GSequence *pose_id; guint bbox_color; guint date_color; guint text_color; guint pose_color; GstVideoRectangle text_dest_rect; /* User overlay */ GSequence *usr_text; GSequence *usr_date; GSequence *usr_simg; GSequence *usr_bbox; GSequence *usr_mask; }; struct _GstOverlayClass { GstVideoFilterClass parent; }; /* GstOverlayUser - common parameters for all user overlays * user_id: overlay user instance id * item_id: overlay HW instacne id * is_applied: flag indicating if new configuration is applied * user_data: user pointer which is used in release handler */ struct _GstOverlayUser { gchar *user_id; guint item_id; gboolean is_applied; gpointer user_data; }; /* GstOverlayUsrText - parameters for user text overlay * base: common parameters for all user overlays * text: user text * color: overlay color * dest_rect: render destination rectangle in video stream */ struct _GstOverlayUsrText { GstOverlayUser base; gchar *text; guint color; GstVideoRectangle dest_rect; }; /* GstOverlayUsrDate - parameters for user date overlay * base: common parameters for all user overlays * date_format: date format * time_format: time format * color: overlay color * dest_rect: render destination rectangle in video stream */ struct _GstOverlayUsrDate { GstOverlayUser base; OverlayDateFormatType date_format; OverlayTimeFormatType time_format; guint color; GstVideoRectangle dest_rect; }; /* GstOverlayUsrSImg - parameters for user static image overlay * base: common parameters for all 
user overlays * img_file: image file name with full path * img_width: image width * img_height: image height * img_buffer: pointer to image buffer * img_size: image buffer size * dest_rect: render destination rectangle in video stream */ struct _GstOverlayUsrSImg { GstOverlayUser base; gchar *img_file; guint img_width; guint img_height; gchar *img_buffer; gsize img_size; GstVideoRectangle dest_rect; }; /* GstOverlayUsrBBox - parameters for user bounding box overlay * base: common parameters for all user overlays * label: bounding box label * boundind_box: boundind box rectangle * color: overlay color */ struct _GstOverlayUsrBBox { GstOverlayUser base; gchar *label; GstVideoRectangle boundind_box; guint color; }; /* GstOverlayUsrMask - parameters for privacy mask overlay * base: common parameters for all user overlays * type: privacy mask type * circle: circle dimensions * rectangle: rectangle dimensions * color: overlay color * dest_rect: render destination rectangle in video stream */ struct _GstOverlayUsrMask { GstOverlayUser base; OverlayPrivacyMaskType type; Overlaycircle circle; OverlayRect rectangle; guint color; GstVideoRectangle dest_rect; }; /* GstOverlayString - pair for string and capacity * string: pointer to string * capacity: size of the storage space currently allocated for the string */ struct _GstOverlayString { gchar *string; guint capacity; }; G_GNUC_INTERNAL GType gst_overlay_get_type (void); #define OVERLAY_IS_PROPERTY_MUTABLE_IN_CURRENT_STATE(pspec, state) \ ((pspec->flags & GST_PARAM_MUTABLE_PLAYING) ? (state <= GST_STATE_PLAYING) \ : ((pspec->flags & GST_PARAM_MUTABLE_PAUSED) ? (state <= GST_STATE_PAUSED) \ : ((pspec->flags & GST_PARAM_MUTABLE_READY) ? (state <= GST_STATE_READY) \ : (state <= GST_STATE_NULL)))) G_END_DECLS #endif // __GST_QTI_OVERLAY_H__ ================================================ FILE: qti_gst_plugins/qtioverlay/qtiqmmf_overlay/CMakeLists.txt ================================================ cmake_minimum_required(VERSION 3.1) project(qmmf_overlay) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DUSE_SKIA=0 -DUSE_CAIRO=1") if (NOT OVERLAY_ENABLED) set(exclude EXCLUDE_FROM_ALL) endif() #add_definitions(-DOVERLAY_OPEN_CL_BLIT) add_library(qmmf_overlay SHARED ${exclude} ${CMAKE_CURRENT_SOURCE_DIR}/qmmf_overlay.cc ) target_include_directories(qmmf_overlay PRIVATE ${TOP_DIRECTORY}) target_include_directories(qmmf_overlay PRIVATE $) target_include_directories(qmmf_overlay PRIVATE ${TOP_DIRECTORY}/common/memory) # TODO remove this hack when camx issue with propagating c and cpp glags is solved target_include_directories(qmmf_overlay PRIVATE ${PKG_CONFIG_SYSROOT_DIR}/usr/include/ion_headers) install(TARGETS qmmf_overlay DESTINATION lib OPTIONAL) target_link_libraries(qmmf_overlay log binder pthread utils cutils dl C2D2 cairo OpenCL qmmf_utils) # TODO remove this hack when camx issue with propagating c and cpp glags is solved target_link_libraries(qmmf_overlay ion) file(GLOB_RECURSE RGB_FILES ${CMAKE_CURRENT_LIST_DIR}/raw_image/*.rgba) install(FILES ${RGB_FILES} DESTINATION ${QMMF_DATA}) install( FILES ${CMAKE_CURRENT_SOURCE_DIR}/overlay_blit_kernel.cl DESTINATION /usr/lib/qmmf ) ================================================ FILE: qti_gst_plugins/qtioverlay/qtiqmmf_overlay/overlay_blit_kernel.cl ================================================ /* * Copyright (c) 2020, The Linux Foundation. All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of The Linux Foundation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #define KERNEL_X_SIZE 1.0f #define KERNEL_Y_SIZE 1.0f const sampler_t smp = CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_LINEAR; kernel void overlay_cl(__read_only image2d_t mask, // 1 __global uchar *frame, // 2 uint y_offset, // 3 uint nv_offset, // 4 ushort stride, // 5 ushort swap_uv // 6 ) { uint x = get_global_id(0); uint y = get_global_id(1); // Read input yuv data uint offset = 2 * (stride * y + x); uchar2 y_out1 = *(frame + y_offset + offset); uchar2 y_out2 = *(frame + y_offset + offset + stride); offset = stride * y + x * 2; uchar2 uv_out = *(__global uchar2 *)(frame + nv_offset + offset); // Read and resize mask data float2 coord; coord.s0 = (KERNEL_X_SIZE * x) / get_global_size(0); coord.s1 = (KERNEL_Y_SIZE * y) / get_global_size(1); uchar4 mask_data = convert_uchar4(read_imageui(mask, smp, coord)); // Convert rgb to yuv float luma; float2 chroma; luma = 0.2126f * mask_data.s0 + 0.7152f * mask_data.s1 + 0.0722f * mask_data.s2; chroma.s0 = -0.09991f * mask_data.s0 - 0.33609f * mask_data.s1 + 0.436f * mask_data.s2; chroma.s1 = 0.615f * mask_data.s0 - 0.55861 * mask_data.s1 - 0.05639f * mask_data.s2; chroma += 128; if (swap_uv) { chroma.s01 = chroma.s10; } luma = clamp(luma, 0.0f, 255.0f); chroma = clamp(chroma, 0.0f, 255.0f); // Apply alpha blending float alpha = mask_data.s3 / 255.0f; y_out1 = convert_uchar2(alpha * luma + (1.0f - alpha) * convert_float2(y_out1)); y_out2 = convert_uchar2(alpha * luma + (1.0f - alpha) * convert_float2(y_out2)); uv_out = convert_uchar2(alpha * chroma + (1.0f - alpha) * convert_float2(uv_out)); // Store output yuv data offset = 2 * (stride * y + x); *(__global uchar2 *)(frame + y_offset + offset) = y_out1; *(__global uchar2 *)(frame + y_offset + offset + stride) = y_out2; offset = stride * y + x * 2; *(__global uchar2 *)(frame + nv_offset + offset) = uv_out; } ================================================ FILE: qti_gst_plugins/qtioverlay/qtiqmmf_overlay/qmmf_overlay.cc ================================================ /* * Copyright (c) 2016-2020, The Linux Foundation. All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of The Linux Foundation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #define LOG_TAG "Overlay" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #if USE_SKIA #include #include #include #include #endif #include "common/utils/qmmf_tools.h" #include "qmmf-sdk/qmmf_overlay.h" #include "qmmf_overlay_item.h" namespace qmmf { namespace overlay { using namespace android; using namespace std; #define ROUND_TO(val, round_to) ((val + round_to - 1) & ~(round_to - 1)) #ifdef OVERLAY_OPEN_CL_BLIT cl_device_id OpenClKernel::device_id_ = nullptr; cl_context OpenClKernel::context_ = nullptr; cl_command_queue OpenClKernel::command_queue_ = nullptr; std::mutex OpenClKernel::lock_; int32_t OpenClKernel::ref_count = 0; int32_t OpenClKernel::OpenCLInit () { ref_count++; if (ref_count > 1) { return 0; } OVDBG_VERBOSE("%s: Enter ", __func__); cl_context_properties properties[] = {CL_CONTEXT_PLATFORM, 0, 0}; cl_platform_id plat = 0; cl_uint ret_num_platform = 0; cl_uint ret_num_devices = 0; cl_int cl_err; cl_err = clGetPlatformIDs(1, &plat, &ret_num_platform); if ((CL_SUCCESS != cl_err) || (ret_num_platform == 0)) { OVDBG_ERROR("%s: Open cl hw platform not available. rc %d", __func__, cl_err); return BAD_VALUE; } properties[1] = (cl_context_properties)plat; cl_err = clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &device_id_, &ret_num_devices); if ((CL_SUCCESS != cl_err) || (ret_num_devices != 1)) { OVDBG_ERROR("%s: Open cl hw device not available. rc %d", __func__, cl_err); return BAD_VALUE; } context_ = clCreateContext(properties, 1, &device_id_, NULL, NULL, &cl_err); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to create Open cl context. rc: %d", __func__, cl_err); return BAD_VALUE; } command_queue_ = clCreateCommandQueueWithProperties(context_, device_id_, 0, &cl_err); if (CL_SUCCESS != cl_err) { clReleaseContext(context_); OVDBG_ERROR("%s: Failed to create Open cl command queue. 
rc: %d", __func__, cl_err); return BAD_VALUE; } OVDBG_VERBOSE("%s: Exit ", __func__); return 0; } int32_t OpenClKernel::OpenCLDeInit () { ref_count--; if (ref_count > 0) { return 0; } else if (ref_count < 0) { OVDBG_ERROR("%s: Instance is already destroyed.", __func__); return -1; } OVDBG_VERBOSE("%s: Enter ", __func__); assert(context_ != nullptr); if (command_queue_) { clReleaseCommandQueue(command_queue_); command_queue_ = nullptr; } if (context_) { clReleaseContext(context_); context_ = nullptr; } if (device_id_) { clReleaseDevice(device_id_); device_id_ = nullptr; } OVDBG_VERBOSE("%s: Exit ", __func__); return 0; } /* This initializes Open CL context and command queue, loads and builds Open CL * program. This is reference instance which cannot be use by itself because * there is no kernel instance */ std::shared_ptr OpenClKernel::New(const std::string &path_to_src, const std::string &name) { std::unique_lock lock(lock_); OpenCLInit(); auto new_instance = std::shared_ptr(new OpenClKernel(name), [](void const *) { if (ref_count == 1) { OpenCLDeInit(); ref_count--; } }); auto ret = new_instance->BuildProgram(path_to_src); if (ret) { OVDBG_ERROR("%s: Failed to build blit program", __func__); return nullptr; } return new_instance; } /* This creates new instance without loading and building Open CL program. * It uses program from reference instance */ std::shared_ptr OpenClKernel::AddInstance() { std::unique_lock lock(lock_); OpenCLInit(); auto new_instance = std::shared_ptr(new OpenClKernel(*this), [this](void const *) { OpenCLDeInit(); }); new_instance->CreateKernelInstance(); return new_instance; } OpenClKernel::~OpenClKernel() { /* OpenCL program is created by reference instance which does not have * kernel instance. */ if (kernel_) { clReleaseKernel(kernel_); kernel_ = nullptr; } else if (prog_) { clReleaseProgram(prog_); prog_ = nullptr; } } int32_t OpenClKernel::BuildProgram(const std::string &path_to_src) { OVDBG_VERBOSE("%s: Enter ", __func__); assert(context_ != nullptr); if (path_to_src.empty()) { OVDBG_ERROR("%s: Invalid input source path! ", __func__); return BAD_VALUE; } std::ifstream src_file(path_to_src); if (!src_file.is_open()) { OVDBG_ERROR("%s: Fail to open source file: %s ", __func__, path_to_src.c_str()); return BAD_VALUE; } std::string kernel_src((std::istreambuf_iterator(src_file)), std::istreambuf_iterator()); cl_int cl_err; cl_int num_program_devices = 1; const char *strings[] = {kernel_src.c_str()}; const size_t length = kernel_src.size(); prog_ = clCreateProgramWithSource(context_, num_program_devices, strings, &length, &cl_err); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Fail to create CL program! ",__func__); return BAD_VALUE; } cl_err = clBuildProgram(prog_, num_program_devices, &device_id_, " -cl-fast-relaxed-math -D ARTIFACT_REMOVE ", nullptr, nullptr); if (CL_SUCCESS != cl_err) { std::string build_log = CreateCLKernelBuildLog(); OVDBG_ERROR("%s: Failed to build Open cl program. 
rc: %d", __func__, cl_err); OVDBG_ERROR("%s: ---------- Open cl build log ----------\n%s", __func__, build_log.c_str()); return BAD_VALUE; } OVDBG_VERBOSE("%s: Exit ", __func__); return 0; } int32_t OpenClKernel::CreateKernelInstance() { OVDBG_VERBOSE("%s: Enter ", __func__); cl_int cl_err; assert(context_ != nullptr); kernel_ = clCreateKernel(prog_, kernel_name_.c_str(), &cl_err); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to create Open cl kernel rc: %d", __func__, cl_err); return BAD_VALUE; } OVDBG_VERBOSE("%s: Exit ", __func__); return 0; } int32_t OpenClKernel::MapBuffer(cl_mem &cl_buffer, void *vaddr, int32_t fd, uint32_t size) { OVDBG_VERBOSE("%s: Enter addr %p fd %d size %d", __func__, vaddr, fd, size); cl_int rc; assert(context_ != nullptr); cl_mem_flags mem_flags = 0; mem_flags |= CL_MEM_READ_WRITE; mem_flags |= CL_MEM_USE_HOST_PTR; mem_flags |= CL_MEM_EXT_HOST_PTR_QCOM; cl_mem_ion_host_ptr ionmem {}; ionmem.ext_host_ptr.allocation_type = CL_MEM_ION_HOST_PTR_QCOM; ionmem.ext_host_ptr.host_cache_policy = CL_MEM_HOST_WRITEBACK_QCOM; ionmem.ion_hostptr = vaddr; ionmem.ion_filedesc = fd; cl_buffer = clCreateBuffer( context_, mem_flags, size, mem_flags & CL_MEM_EXT_HOST_PTR_QCOM ? &ionmem : nullptr, &rc); if (CL_SUCCESS != rc) { OVDBG_ERROR("%s: Cannot create cl buffer memory object! rc %d", __func__, rc); return BAD_VALUE; } return 0; } int32_t OpenClKernel::UnMapBuffer(cl_mem &cl_buffer) { if (cl_buffer) { auto rc = clReleaseMemObject(cl_buffer); if (CL_SUCCESS != rc) { OVDBG_ERROR("%s: cannot release buf! rc %d", __func__, rc); return BAD_VALUE; } cl_buffer = nullptr; } return 0; } // todo: add format as input argument int32_t OpenClKernel::MapImage(cl_mem &cl_buffer, void *vaddr, int32_t fd, size_t width, size_t height, uint32_t stride) { cl_int rc; uint32_t row_pitch = 0; assert(context_ != nullptr); cl_image_format format; format.image_channel_data_type = CL_UNSIGNED_INT8; format.image_channel_order = CL_RGBA; clGetDeviceImageInfoQCOM(device_id_, width, height, &format, CL_IMAGE_ROW_PITCH, sizeof(row_pitch), &row_pitch, NULL); if (stride < row_pitch) { OVDBG_ERROR("%s: Error stride: %d platform stride: %d", __func__, stride, row_pitch); return BAD_VALUE; } cl_mem_flags mem_flags = 0; mem_flags |= CL_MEM_READ_WRITE; mem_flags |= CL_MEM_USE_HOST_PTR; mem_flags |= CL_MEM_EXT_HOST_PTR_QCOM; cl_mem_ion_host_ptr ionmem{}; ionmem.ext_host_ptr.allocation_type = CL_MEM_ION_HOST_PTR_QCOM; ionmem.ext_host_ptr.host_cache_policy = CL_MEM_HOST_WRITEBACK_QCOM; ionmem.ion_hostptr = vaddr; ionmem.ion_filedesc = fd; cl_image_desc desc; desc.image_type = CL_MEM_OBJECT_IMAGE2D; desc.image_width = width; desc.image_height = height; desc.image_depth = 0; desc.image_array_size = 0; desc.image_row_pitch = stride; desc.image_slice_pitch = desc.image_row_pitch * desc.image_height; desc.num_mip_levels = 0; desc.num_samples = 0; desc.buffer = nullptr; cl_buffer = clCreateImage( context_, mem_flags, &format, &desc, mem_flags & CL_MEM_EXT_HOST_PTR_QCOM ? &ionmem : nullptr, &rc); if (CL_SUCCESS != rc) { OVDBG_ERROR("%s: Cannot create cl image memory object! 
rc %d", __func__, rc); return BAD_VALUE; } return 0; } int32_t OpenClKernel::unMapImage(cl_mem &cl_buffer) { return UnMapBuffer(cl_buffer); } int32_t OpenClKernel::SetKernelArgs(OpenClFrame &frame, OpenCLArgs &args) { OVDBG_VERBOSE("%s: Enter ", __func__); cl_uint arg_index = 0;/* */ cl_int cl_err; assert(context_ != nullptr); assert(command_queue_ != nullptr); cl_mem buf_to_process = frame.cl_buffer; cl_mem mask_to_process = args.mask; cl_uint offset_y = frame.plane0_offset + args.y * frame.stride0 + args.x; cl_uint offset_nv = frame.plane1_offset + args.y * frame.stride1 / 2 + args.x; cl_ushort swap_uv = frame.swap_uv; cl_ushort stride = frame.stride0; global_size_[0] = args.width / 2; global_size_[1] = args.height / 2; // __read_only image2d_t mask, // 1 cl_err = clSetKernelArg(kernel_, arg_index++, sizeof(cl_mem), &mask_to_process); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to set Open cl kernel argument %d. rc: %d ", __func__, arg_index - 1, cl_err); return BAD_VALUE; } // __global uchar *frame, // 2 cl_err = clSetKernelArg(kernel_, arg_index++, sizeof(cl_mem), &buf_to_process); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to set Open cl kernel argument %d. rc: %d ", __func__, arg_index - 1, cl_err); return BAD_VALUE; } // uint y_offset, // 3 cl_err = clSetKernelArg(kernel_, arg_index++, sizeof(cl_uint), &offset_y); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to set Open cl kernel argument %d. rc: %d ", __func__, arg_index - 1, cl_err); return BAD_VALUE; } // uint nv_offset, // 4 cl_err = clSetKernelArg(kernel_, arg_index++, sizeof(cl_uint), &offset_nv); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to set Open cl kernel argument %d. rc: %d ", __func__, arg_index - 1, cl_err); return BAD_VALUE; } // ushort stride, // 5 cl_err = clSetKernelArg(kernel_, arg_index++, sizeof(cl_ushort), &stride); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to set Open cl kernel argument %d. rc: %d ", __func__, arg_index - 1, cl_err); return BAD_VALUE; } // ushort swap_uv // 6 cl_err = clSetKernelArg(kernel_, arg_index++, sizeof(cl_ushort), &swap_uv); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to set Open cl kernel argument %d. rc: %d ", __func__, arg_index - 1, cl_err); return BAD_VALUE; } OVDBG_VERBOSE("%s: Exit ", __func__); return 0; } void OpenClKernel::ClCompleteCallback(cl_event event, cl_int event_command_exec_status, void *user_data) { OVDBG_VERBOSE("%s: Enter ", __func__); if (user_data != nullptr) { struct SyncObject *sync = reinterpret_cast(user_data); std::unique_lock lock(sync->lock_); sync->done_ = true; sync->signal_.Signal(); } clReleaseEvent(event); OVDBG_VERBOSE("%s: Exit ", __func__); } int32_t OpenClKernel::RunCLKernel(bool wait_to_finish) { OVDBG_VERBOSE("%s: Enter ", __func__); cl_int cl_err = CL_SUCCESS; cl_event kernel_event = nullptr; assert(context_ != nullptr); assert(command_queue_ != nullptr); size_t *local_work_size = local_size_[0] + local_size_[1] == 0 ? nullptr : local_size_; cl_err = clEnqueueNDRangeKernel( command_queue_, kernel_, kernel_dimensions_, global_offset_, global_size_, local_work_size, 0, nullptr, wait_to_finish ? &kernel_event : nullptr); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to enqueue Open cl kernel! 
rc: %d ", __func__, cl_err); return BAD_VALUE; } if (wait_to_finish) { std::lock_guard lock(sync_.lock_); sync_.done_ = false; cl_err = clSetEventCallback(kernel_event, CL_COMPLETE, &ClCompleteCallback, reinterpret_cast(&sync_)); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to set Open cl kernel callback! rc: %d ", __func__, cl_err); return BAD_VALUE; } } if (wait_to_finish) { cl_err = clFlush(command_queue_); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to flush Open cl command queue! rc: %d ", __func__, cl_err); return BAD_VALUE; } std::chrono::nanoseconds wait_time(kWaitProcessTimeout); std::unique_lock lock(sync_.lock_); while (sync_.done_ == false) { auto ret = sync_.signal_.WaitFor(lock, wait_time); if (ret != 0) { OVDBG_ERROR("%s: Timed out on Wait", __func__); return TIMED_OUT; } } } OVDBG_VERBOSE("%s: Exit ", __func__); return 0; } std::string OpenClKernel::CreateCLKernelBuildLog() { cl_int cl_err; size_t log_size; cl_err = clGetProgramBuildInfo(prog_, device_id_, CL_PROGRAM_BUILD_LOG, 0, nullptr, &log_size); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to get Open cl build log size. rc: %d ", __func__, cl_err); return std::string(); } std::string build_log; build_log.reserve(log_size); void *log = static_cast(const_cast(build_log.data())); cl_err = clGetProgramBuildInfo(prog_, device_id_, CL_PROGRAM_BUILD_LOG, log_size, log, nullptr); if (CL_SUCCESS != cl_err) { OVDBG_ERROR("%s: Failed to get Open cl build log. rc: %d ", __func__, cl_err); return std::string(); } return build_log; } #endif // OVERLAY_OPEN_CL_BLIT #ifdef OVERLAY_OPEN_CL_BLIT Overlay::Overlay() : target_c2dsurface_id_(-1), blit_instance_(nullptr), ion_device_(-1), id_(0) { } #else // OVERLAY_OPEN_CL_BLIT Overlay::Overlay() : target_c2dsurface_id_(-1), ion_device_(-1), id_(0) { } #endif // OVERLAY_OPEN_CL_BLIT Overlay::~Overlay() { OVDBG_INFO("%s: Enter ",__func__); for (auto &iter : overlay_items_) { if (iter.second) delete iter.second; } overlay_items_.clear(); if(target_c2dsurface_id_) { c2dDestroySurface(target_c2dsurface_id_); target_c2dsurface_id_ = 0; OVDBG_INFO("%s: Destroyed c2d Target Surface", __func__); } if (ion_device_ != -1) { ion_close(ion_device_); ion_device_ = -1; } OVDBG_INFO("%s: Exit ",__func__); } int32_t Overlay::Init(const TargetBufferFormat& format) { OVDBG_VERBOSE("%s:Enter", __func__); int32_t ret = 0; ion_device_ = ion_open(); if (ion_device_ < 0) { OVDBG_ERROR("%s: Ion dev open failed %s\n", __func__, strerror(errno)); return -1; } #ifdef OVERLAY_OPEN_CL_BLIT blit_instance_ = OpenClKernel::New(BLIT_KERNEL, BLIT_KERNEL_NAME); if (ret) { OVDBG_ERROR("%s: Failed to build blit program", __func__); ion_close(ion_device_); ion_device_ = -1; return BAD_VALUE; } #else // OVERLAY_OPEN_CL_BLIT uint32_t c2dColotFormat = GetC2dColorFormat(format); // Create dummy C2D surface, it is required to Initialize // C2D driver before calling any c2d Apis. 
C2D_YUV_SURFACE_DEF surface_def = { c2dColotFormat, 1 * 4, 1 * 4, (void*)0xaaaaaaaa, (void*)0xaaaaaaaa, 1 * 4, (void*)0xaaaaaaaa, (void*)0xaaaaaaaa, 1 * 4, (void*)0xaaaaaaaa, (void*)0xaaaaaaaa, 1 * 4, }; ret = c2dCreateSurface(&target_c2dsurface_id_, C2D_TARGET, (C2D_SURFACE_TYPE)(C2D_SURFACE_YUV_HOST | C2D_SURFACE_WITH_PHYS | C2D_SURFACE_WITH_PHYS_DUMMY), &surface_def); if (ret != C2D_STATUS_OK) { ion_close(ion_device_); ion_device_ = -1; OVDBG_ERROR("%s: c2dCreateSurface failed!",__func__); return ret; } #endif // OVERLAY_OPEN_CL_BLIT OVDBG_VERBOSE("%s: Exit",__func__); return ret; } int32_t Overlay::CreateOverlayItem(OverlayParam& param, uint32_t* overlay_id) { OVDBG_VERBOSE("%s:Enter ", __func__); OverlayItem* overlayItem = nullptr; #ifdef OVERLAY_OPEN_CL_BLIT switch (param.type) { case OverlayType::kDateType: overlayItem = new OverlayItemDateAndTime(ion_device_, blit_instance_); break; case OverlayType::kUserText: overlayItem = new OverlayItemText(ion_device_, blit_instance_); break; case OverlayType::kStaticImage: overlayItem = new OverlayItemStaticImage(ion_device_, blit_instance_); break; case OverlayType::kBoundingBox: overlayItem = new OverlayItemBoundingBox(ion_device_, blit_instance_); break; case OverlayType::kPrivacyMask: overlayItem = new OverlayItemPrivacyMask(ion_device_, blit_instance_); break; case OverlayType::kGraph: overlayItem = new OverlayItemGraph(ion_device_, blit_instance_); break; default: OVDBG_ERROR("%s: OverlayType(%d) not supported!", __func__, param.type); break; } #else // OVERLAY_OPEN_CL_BLIT switch (param.type) { case OverlayType::kDateType: overlayItem = new OverlayItemDateAndTime(ion_device_); break; case OverlayType::kUserText: overlayItem = new OverlayItemText(ion_device_); break; case OverlayType::kStaticImage: overlayItem = new OverlayItemStaticImage(ion_device_); break; case OverlayType::kBoundingBox: overlayItem = new OverlayItemBoundingBox(ion_device_); break; case OverlayType::kPrivacyMask: overlayItem = new OverlayItemPrivacyMask(ion_device_); break; case OverlayType::kGraph: overlayItem = new OverlayItemGraph(ion_device_); break; default: OVDBG_ERROR("%s: OverlayType(%d) not supported!", __func__, param.type); break; } #endif // OVERLAY_OPEN_CL_BLIT if(!overlayItem) { OVDBG_ERROR("%s: OverlayItem type(%d) failed!", __func__, param.type); return NO_INIT; } auto ret = overlayItem->Init(param); if(ret != C2D_STATUS_OK) { OVDBG_ERROR("%s:OverlayItem failed of type(%d)", __func__, param.type); delete overlayItem; return ret; } // StaticImage type overlayItem never be dirty as its contents are static, // all other items are dirty at Init time and will be marked as dirty whenever // their configuration changes at run time after first draw. 
if(param.type == OverlayType::kStaticImage) { overlayItem->MarkDirty(false); } else { overlayItem->MarkDirty(true); } *overlay_id = ++id_; overlay_items_.insert({*overlay_id, overlayItem}); OVDBG_INFO("%s:OverlayItem Type(%d) Id(%d) Created Successfully !",__func__, param.type, *overlay_id); OVDBG_VERBOSE("%s:Exit ", __func__); return ret; } int32_t Overlay::DeleteOverlayItem(uint32_t overlay_id) { OVDBG_VERBOSE("%s:Enter ", __func__); std::lock_guard lock(lock_); int32_t ret = 0; if(!IsOverlayItemValid(overlay_id)) { OVDBG_ERROR("%s: overlay_id(%d) is not valid!",__func__, overlay_id); return BAD_VALUE; } OverlayItem* overlayItem = overlay_items_.at(overlay_id); assert(overlayItem != nullptr); delete overlayItem; overlay_items_.erase(overlay_id); OVDBG_INFO("%s: overlay_id(%d) & overlayItem(0x%p) Removed from map", __func__, overlay_id, overlayItem); OVDBG_VERBOSE("%s:Exit ", __func__); return ret; } int32_t Overlay::GetOverlayParams(uint32_t overlay_id, OverlayParam& param) { int32_t ret = 0; if(!IsOverlayItemValid(overlay_id)) { OVDBG_ERROR("%s: overlay_id(%d) is not valid!",__func__, overlay_id); return BAD_VALUE; } OverlayItem* overlayItem = overlay_items_.at(overlay_id); assert(overlayItem != nullptr); memset(¶m, 0x0, sizeof param); overlayItem->GetParameters(param); return ret; } int32_t Overlay::UpdateOverlayParams(uint32_t overlay_id, OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ", __func__); std::lock_guard lock(lock_); if(!IsOverlayItemValid(overlay_id)) { OVDBG_ERROR("%s: overlay_id(%d) is not valid!",__func__, overlay_id); return BAD_VALUE; } OverlayItem* overlayItem = overlay_items_.at(overlay_id); assert(overlayItem != nullptr); OVDBG_VERBOSE("%s:Exit ", __func__); return overlayItem->UpdateParameters(param); } int32_t Overlay::EnableOverlayItem(uint32_t overlay_id) { OVDBG_VERBOSE("%s: Enter", __func__); std::lock_guard lock(lock_); int32_t ret = 0; if(!IsOverlayItemValid(overlay_id)) { OVDBG_ERROR("%s: overlay_id(%d) is not valid!",__func__, overlay_id); return BAD_VALUE; } OverlayItem* overlayItem = overlay_items_.at(overlay_id); assert(overlayItem != nullptr); overlayItem->Activate(true); OVDBG_DEBUG("%s: OverlayItem Id(%d) Activated", __func__, overlay_id); OVDBG_VERBOSE("%s: Exit", __func__); return ret; } int32_t Overlay::DisableOverlayItem(uint32_t overlay_id) { OVDBG_VERBOSE("%s: Enter", __func__); std::lock_guard lock(lock_); int32_t ret = 0; if(!IsOverlayItemValid(overlay_id)) { OVDBG_ERROR("%s: overlay_id(%d) is not valid!",__func__, overlay_id); return BAD_VALUE; } OverlayItem* overlayItem = overlay_items_.at(overlay_id); assert(overlayItem != nullptr); overlayItem->Activate(false); OVDBG_DEBUG("%s: OverlayItem Id(%d) DeActivated", __func__, overlay_id); OVDBG_VERBOSE("%s: Exit", __func__); return ret; } #ifdef OVERLAY_OPEN_CL_BLIT int32_t Overlay::ApplyOverlay(const OverlayTargetBuffer& buffer) { OVDBG_VERBOSE("%s: Enter", __func__); #ifdef DEBUG_BLIT_TIME auto start_time = ::std::chrono::high_resolution_clock::now(); #endif int32_t ret = 0; int32_t obj_idx = 0; std::lock_guard lock(lock_); size_t numActiveOverlays = 0; bool isItemsActive = false; for (auto &iter : overlay_items_) { if ((iter).second->IsActive()) { isItemsActive = true; } } if (!isItemsActive) { OVDBG_VERBOSE("%s: No overlayItem is Active!", __func__); return ret; } assert(buffer.ion_fd != 0); assert(buffer.width != 0 && buffer.height != 0); assert(buffer.frame_len != 0); OVDBG_VERBOSE("%s:OverlayTargetBuffer: ion_fd = %d",__func__, buffer.ion_fd); OVDBG_VERBOSE("%s:OverlayTargetBuffer: Width 
= %d & Height = %d & frameLength = %d", __func__, buffer.width,
      buffer.height, buffer.frame_len);
  OVDBG_VERBOSE("%s: OverlayTargetBuffer: format = %d", __func__,
      buffer.format);

  void* bufVaddr = mmap(nullptr, buffer.frame_len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, buffer.ion_fd, 0);
  // mmap() reports failure with MAP_FAILED, not nullptr.
  if (bufVaddr == MAP_FAILED) {
    OVDBG_ERROR("%s: mmap failed!", __func__);
    return UNKNOWN_ERROR;
  }
  SyncStart(buffer.ion_fd);

  // map buffer
  OpenClFrame in_frame;
  ret = OpenClKernel::MapBuffer(in_frame.cl_buffer, bufVaddr, buffer.ion_fd,
                                buffer.frame_len);
  if (ret) {
    OVDBG_ERROR("%s: Fail to map buffer to Open CL!", __func__);
    munmap(bufVaddr, buffer.frame_len);
    return UNKNOWN_ERROR;
  }

  // Iterate all dirty overlay Items, and update them.
  for (auto &iter : overlay_items_) {
    if ((iter).second->IsActive()) {
      ret = (iter).second->UpdateAndDraw();
      if (ret) {
        OVDBG_ERROR("%s: Update & Draw failed for Item=%d", __func__,
            (iter).first);
      }
    }
  }

  // Get config from overlay instances
  std::vector<DrawInfo> draw_infos;
  for (auto &iter : overlay_items_) {
    OverlayItem* overlay_item = (iter).second;
    if (overlay_item->IsActive()) {
      overlay_item->GetDrawInfo(buffer.width, buffer.height, draw_infos);
    }
  }

  in_frame.plane0_offset = 0;
  if (buffer.format == TargetBufferFormat::kYUVNV12) {
    in_frame.stride0 = VENUS_Y_STRIDE(COLOR_FMT_NV12, buffer.width);
    in_frame.stride1 = VENUS_UV_STRIDE(COLOR_FMT_NV12, buffer.width);
    in_frame.plane1_offset =
        in_frame.stride0 * VENUS_Y_SCANLINES(COLOR_FMT_NV12, buffer.height);
    in_frame.swap_uv = false;
  } else {
    in_frame.stride0 = VENUS_Y_STRIDE(COLOR_FMT_NV21, buffer.width);
    in_frame.stride1 = VENUS_UV_STRIDE(COLOR_FMT_NV21, buffer.width);
    in_frame.plane1_offset =
        in_frame.stride0 * VENUS_Y_SCANLINES(COLOR_FMT_NV21, buffer.height);
    in_frame.swap_uv = true;
  }

  // Configure kernels
  for (auto &item : draw_infos) {
    OpenCLArgs args;
    args.width = item.width;
    args.height = item.height;
    args.x = item.x;
    args.y = item.y;
    args.mask = item.mask;
    item.blit_inst->SetKernelArgs(in_frame, args);
  }

  // Apply kernels; only the last one waits for completion.
  for (size_t i = 0; i < draw_infos.size(); i++) {
    draw_infos[i].blit_inst->RunCLKernel(i == draw_infos.size() - 1);
  }

  // unmap buffer
  OpenClKernel::UnMapBuffer(in_frame.cl_buffer);

  if (bufVaddr) {
    if (buffer.ion_fd)
      SyncEnd(buffer.ion_fd);
    munmap(bufVaddr, buffer.frame_len);
    bufVaddr = nullptr;
  }

#ifdef DEBUG_BLIT_TIME
  auto end_time = ::std::chrono::high_resolution_clock::now();
  auto diff = ::std::chrono::duration_cast<::std::chrono::milliseconds>
                  (end_time - start_time).count();
  OVDBG_INFO("%s: Time taken in 2D draw + Blit=%lld ms", __func__, diff);
#endif
  OVDBG_VERBOSE("%s: Exit ",__func__);
  return ret;
}
#else // OVERLAY_OPEN_CL_BLIT
int32_t Overlay::ApplyOverlay(const OverlayTargetBuffer& buffer) {
  OVDBG_VERBOSE("%s: Enter", __func__);
#ifdef DEBUG_BLIT_TIME
  auto start_time = ::std::chrono::high_resolution_clock::now();
#endif
  int32_t ret = 0;
  int32_t obj_idx = 0;
  std::lock_guard<std::mutex> lock(lock_);

  size_t numActiveOverlays = 0;
  bool isItemsActive = false;
  for (auto &iter : overlay_items_) {
    if ((iter).second->IsActive()) {
      isItemsActive = true;
    }
  }
  if(!isItemsActive) {
    OVDBG_VERBOSE("%s: No overlayItem is Active!", __func__);
    return ret;
  }

  assert(buffer.ion_fd != 0);
  assert(buffer.width != 0 && buffer.height != 0);
  assert(buffer.frame_len != 0);

  OVDBG_VERBOSE("%s:OverlayTargetBuffer: ion_fd = %d",__func__, buffer.ion_fd);
  OVDBG_VERBOSE("%s:OverlayTargetBuffer: Width = %d & Height = %d &"
      " frameLength = %d", __func__, buffer.width, buffer.height,
      buffer.frame_len);
  OVDBG_VERBOSE("%s: OverlayTargetBuffer: format = %d", __func__,
buffer.format); void* bufVaddr = mmap(nullptr, buffer.frame_len, PROT_READ | PROT_WRITE, MAP_SHARED, buffer.ion_fd, 0); if(!bufVaddr) { OVDBG_ERROR("%s: mmap failed!", __func__); return UNKNOWN_ERROR; } SyncStart(buffer.ion_fd); // Map input YUV buffer to GPU. void *gpuAddr = nullptr; ret = c2dMapAddr(buffer.ion_fd, bufVaddr, buffer.frame_len, 0, KGSL_USER_MEM_TYPE_ION, &gpuAddr); if(ret != C2D_STATUS_OK) { OVDBG_ERROR("%s: c2dMapAddr failed!",__func__); goto EXIT; } // Target surface format. C2D_YUV_SURFACE_DEF surface_def; surface_def.format = GetC2dColorFormat(buffer.format); surface_def.width = buffer.width; surface_def.height = buffer.height; int32_t planeYLen; switch (surface_def.format) { case C2D_COLOR_FORMAT_420_NV12: //Y plane stride. surface_def.stride0 = VENUS_Y_STRIDE(COLOR_FMT_NV12, surface_def.width); //UV plane stride. surface_def.stride1 = VENUS_UV_STRIDE(COLOR_FMT_NV12, surface_def.width); //UV plane hostptr. planeYLen = surface_def.stride0 * VENUS_Y_SCANLINES(COLOR_FMT_NV12, surface_def.height); break; case C2D_COLOR_FORMAT_420_NV21: //Y plane stride. surface_def.stride0 = VENUS_Y_STRIDE(COLOR_FMT_NV21, surface_def.width); //UV plane stride. surface_def.stride1 = VENUS_UV_STRIDE(COLOR_FMT_NV21, surface_def.width); //UV plane hostptr. planeYLen = surface_def.stride0 * VENUS_Y_SCANLINES(COLOR_FMT_NV21, surface_def.height); break; case (C2D_COLOR_FORMAT_420_NV12 | C2D_FORMAT_UBWC_COMPRESSED): //Y plane stride. surface_def.stride0 = VENUS_Y_STRIDE(COLOR_FMT_NV12_UBWC, surface_def.width); //UV plane stride. surface_def.stride1 = VENUS_UV_STRIDE(COLOR_FMT_NV12_UBWC, surface_def.width); //UV plane hostptr. planeYLen = ROUND_TO( VENUS_Y_META_STRIDE(COLOR_FMT_NV12_UBWC, surface_def.width) * VENUS_Y_META_SCANLINES(COLOR_FMT_NV12_UBWC, surface_def.height), 4096) + ROUND_TO(surface_def.stride0 * VENUS_Y_SCANLINES(COLOR_FMT_NV12_UBWC, surface_def.height), 4096); break; default: OVDBG_ERROR("%s: Unknown format: %d", __func__, surface_def.format); goto EXIT; } OVDBG_DEBUG("%s: surface_def.stride0 = %d ",__func__, surface_def.stride0); OVDBG_DEBUG("%s: planeYLen = %d",__func__, planeYLen); //Y plane hostptr. surface_def.plane0 = (void*)bufVaddr; //Y plane Gpu address. surface_def.phys0 = (void*)gpuAddr; surface_def.plane1 = (void*)((intptr_t)bufVaddr + planeYLen); //UV plane Gpu address. surface_def.phys1 = (void*)((intptr_t)gpuAddr + planeYLen); //Create C2d target surface outof camera buffer. camera buffer //is target surface where c2d blits different types of overlays //static logo, system time and date. ret = c2dUpdateSurface(target_c2dsurface_id_, C2D_SOURCE, (C2D_SURFACE_TYPE)(C2D_SURFACE_YUV_HOST |C2D_SURFACE_WITH_PHYS), &surface_def); if(ret != C2D_STATUS_OK) { OVDBG_ERROR("%s: c2dUpdateSurface failed!",__func__); goto EXIT; } // Iterate all dirty overlay Items, and update them. for (auto &iter : overlay_items_) { if ((iter).second->IsActive()) { ret = (iter).second->UpdateAndDraw(); if(ret != 0) { OVDBG_ERROR("%s: Update & Draw failed for Item=%d", __func__, (iter).first); } } } C2dObjects c2d_objects; memset(&c2d_objects, 0x0, sizeof c2d_objects); // Iterate all updated overlayItems, and get coordinates. 
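  // [Editor's note, not in the original source] The << 16 shifts below are
  // because C2D rectangle coordinates are expressed in 16.16 fixed point;
  // e.g. a target x of 100 pixels is passed as 100 << 16 = 6553600.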
  for (auto &iter : overlay_items_) {
    std::vector<DrawInfo> draw_infos;
    OverlayItem* overlay_item = (iter).second;
    if (overlay_item->IsActive()) {
      overlay_item->GetDrawInfo(buffer.width, buffer.height, draw_infos);
      auto info_size = draw_infos.size();
      for (size_t i = 0; i < info_size; i++) {
        c2d_objects.objects[obj_idx].surface_id = draw_infos[i].c2dSurfaceId;
        c2d_objects.objects[obj_idx].config_mask = C2D_ALPHA_BLEND_SRC_ATOP |
            C2D_TARGET_RECT_BIT;
        if (draw_infos[i].in_width) {
          c2d_objects.objects[obj_idx].config_mask |= C2D_SOURCE_RECT_BIT;
          c2d_objects.objects[obj_idx].source_rect.x = draw_infos[i].in_x << 16;
          c2d_objects.objects[obj_idx].source_rect.y = draw_infos[i].in_y << 16;
          c2d_objects.objects[obj_idx].source_rect.width =
              draw_infos[i].in_width << 16;
          c2d_objects.objects[obj_idx].source_rect.height =
              draw_infos[i].in_height << 16;
        }
        c2d_objects.objects[obj_idx].target_rect.x = draw_infos[i].x << 16;
        c2d_objects.objects[obj_idx].target_rect.y = draw_infos[i].y << 16;
        c2d_objects.objects[obj_idx].target_rect.width =
            draw_infos[i].width << 16;
        c2d_objects.objects[obj_idx].target_rect.height =
            draw_infos[i].height << 16;
        OVDBG_VERBOSE("%s: c2d_objects[%d].surface_id=%d", __func__, obj_idx,
            c2d_objects.objects[obj_idx].surface_id);
        OVDBG_VERBOSE("%s: c2d_objects[%d].target_rect.x=%d", __func__,
            obj_idx, draw_infos[i].x);
        OVDBG_VERBOSE("%s: c2d_objects[%d].target_rect.y=%d", __func__,
            obj_idx, draw_infos[i].y);
        OVDBG_VERBOSE("%s: c2d_objects[%d].target_rect.width=%d", __func__,
            obj_idx, draw_infos[i].width);
        OVDBG_VERBOSE("%s: c2d_objects[%d].target_rect.height=%d", __func__,
            obj_idx, draw_infos[i].height);
        ++numActiveOverlays;
        ++obj_idx;
      }
    }
  }
  OVDBG_VERBOSE("%s: numActiveOverlays=%zu", __func__, numActiveOverlays);

  // Chain the draw objects; i + 1 < n also guards against underflow when the
  // list is empty.
  for (size_t i = 0; i + 1 < numActiveOverlays; i++) {
    c2d_objects.objects[i].next = &c2d_objects.objects[i+1];
  }
  ret = c2dDraw(target_c2dsurface_id_, 0, 0, 0, 0, c2d_objects.objects,
      numActiveOverlays);
  if(ret != C2D_STATUS_OK) {
    OVDBG_ERROR("%s: c2dDraw failed!",__func__);
    goto EXIT;
  }
  // c2dFinish blocks until the GPU has completed the draw.
  ret = c2dFinish(target_c2dsurface_id_);
  if(ret != C2D_STATUS_OK) {
    OVDBG_ERROR("%s: c2dFinish failed!",__func__);
    goto EXIT;
  }
  // Unmap camera buffer from GPU after draw is completed.
  ret = c2dUnMapAddr(gpuAddr);
  if(ret != C2D_STATUS_OK) {
    OVDBG_ERROR("%s: c2dUnMapAddr failed!",__func__);
    goto EXIT;
  }

EXIT:
  if (bufVaddr) {
    if (buffer.ion_fd)
      SyncEnd(buffer.ion_fd);
    munmap(bufVaddr, buffer.frame_len);
    bufVaddr = nullptr;
  }
#ifdef DEBUG_BLIT_TIME
  auto end_time = ::std::chrono::high_resolution_clock::now();
  auto diff = ::std::chrono::duration_cast<::std::chrono::milliseconds>
                  (end_time - start_time).count();
  OVDBG_INFO("%s: Time taken in 2D draw + Blit=%lld ms", __func__, diff);
#endif
  OVDBG_VERBOSE("%s: Exit ",__func__);
  return ret;
}
#endif // OVERLAY_OPEN_CL_BLIT

int32_t Overlay::ProcessOverlayItems(
    const std::vector<OverlayParam>& overlay_list) {
  OVDBG_VERBOSE("%s: Enter", __func__);
  std::lock_guard<std::mutex> lock(lock_);
  int32_t ret = 0;
  uint32_t overlay_id = 0;
  uint32_t size = overlay_list.size();
  uint32_t num_items = overlay_items_.size();
  // Check overlay_items_ size and allocate in chunks of 10.
  // If the requested size is greater than available, allocate more.
  if (num_items < size) {
    auto overlay_param = overlay_list.at(0);
    for (auto i = 0; i < 10; i++) {
      ret = CreateOverlayItem(overlay_param, &overlay_id);
      if (ret) {
        OVDBG_ERROR("%s: CreateOverlayItem failed for id:%u!!", __func__,
            overlay_id);
        return ret;
      }
    }
  }
  OVDBG_VERBOSE("%s: size:%u num_items:%u", __func__, size, num_items);
  auto items_iter = overlay_items_.begin();
  OverlayItem* overlayItem = nullptr;
  for (uint32_t index = 0; index < size; index++, items_iter++) {
    auto overlay_param = overlay_list.at(index);
    overlay_id = items_iter->first;
    overlayItem = items_iter->second;
    OVDBG_VERBOSE("%s:id:%u w: %u h:%u", __func__, overlay_id,
        overlay_param.dst_rect.width, overlay_param.dst_rect.height);
    ret = overlayItem->UpdateParameters(overlay_param);
    if (ret) {
      OVDBG_ERROR("%s: UpdateParameters failed for id: %u!", __func__,
          overlay_id);
      return ret;
    }
    if (!overlayItem->IsActive()) {
      overlayItem->Activate(true);
      OVDBG_DEBUG("%s: OverlayItem Id(%d) Activated", __func__, overlay_id);
    } else {
      OVDBG_DEBUG("%s: OverlayItem Id(%d) already Activated", __func__,
          overlay_id);
    }
  }
  // Disable inactive overlays (remove the active flag).
  while (items_iter != overlay_items_.end()) {
    overlay_id = items_iter->first;
    overlayItem = items_iter->second;
    if (overlayItem->IsActive()) {
      OVDBG_DEBUG("%s: Disable overlayItem for id: %u!", __func__, overlay_id);
      overlayItem->Activate(false);
    }
    items_iter++;
  }
  OVDBG_VERBOSE("%s: Exit", __func__);
  return ret;
}

int32_t Overlay::DeleteOverlayItems() {
  OVDBG_VERBOSE("%s: Enter", __func__);
  std::lock_guard<std::mutex> lock(lock_);
  int32_t ret = 0;
  uint32_t overlay_id = 0;
  OverlayItem* overlayItem = nullptr;
  auto items_iter = overlay_items_.begin();
  while (items_iter != overlay_items_.end()) {
    overlay_id = items_iter->first;
    overlayItem = items_iter->second;
    assert(overlayItem != nullptr);
    delete overlayItem;
    OVDBG_INFO("%s: overlay_id(%d) & overlayItem(0x%p) Removed from map",
        __func__, overlay_id, overlayItem);
    // erase() returns the next valid iterator; erasing by key and then
    // advancing the stale iterator would be undefined behavior.
    items_iter = overlay_items_.erase(items_iter);
  }
  OVDBG_VERBOSE("%s: Exit", __func__);
  return ret;
}

uint32_t Overlay::GetC2dColorFormat(const TargetBufferFormat& format) {
  uint32_t c2dColorFormat = C2D_COLOR_FORMAT_420_NV12;
  switch (format) {
    case TargetBufferFormat::kYUVNV12:
      c2dColorFormat = C2D_COLOR_FORMAT_420_NV12;
      break;
    case TargetBufferFormat::kYUVNV21:
      c2dColorFormat = C2D_COLOR_FORMAT_420_NV21;
      break;
    case TargetBufferFormat::kYUVNV12UBWC:
      c2dColorFormat = C2D_COLOR_FORMAT_420_NV12 | C2D_FORMAT_UBWC_COMPRESSED;
      break;
    default:
      OVDBG_ERROR("%s: Unsupported buffer format: %d", __func__, format);
      break;
  }
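  // [Editor's example, not in the original source] For a UBWC-compressed
  // camera buffer the caller gets both flags ORed together:
  //   GetC2dColorFormat(TargetBufferFormat::kYUVNV12UBWC)
  //     == (C2D_COLOR_FORMAT_420_NV12 | C2D_FORMAT_UBWC_COMPRESSED)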
OVDBG_VERBOSE("%s:Selected C2D ColorFormat=%d",__func__, c2dColorFormat); return c2dColorFormat; } bool Overlay::IsOverlayItemValid(uint32_t overlay_id) { OVDBG_DEBUG("%s: Enter overlay_id(%d)",__func__, overlay_id); bool valid = false; for (auto& iter : overlay_items_) { if (overlay_id == (iter).first) { valid = true; break; } } OVDBG_DEBUG("%s: Exit overlay_id(%d)",__func__, overlay_id); return valid; } #ifdef OVERLAY_OPEN_CL_BLIT OverlayItem::OverlayItem(int32_t ion_device, OverlayType type, std::shared_ptr &blit) : surface_(), location_type_(OverlayLocationType::kBottomLeft), dirty_(false), ion_device_(ion_device), type_(type), is_active_(false) { OVDBG_VERBOSE("%s:Enter ", __func__); #if USE_CAIRO cr_surface_ = nullptr; cr_context_ = nullptr; #endif if (blit.get()) { // Create local instance of blit kernel surface_.blit_inst_ = blit->AddInstance(); } OVDBG_VERBOSE("%s:Exit ", __func__); } #else // OVERLAY_OPEN_CL_BLIT OverlayItem::OverlayItem(int32_t ion_device, OverlayType type) : surface_(), location_type_(OverlayLocationType::kBottomLeft), dirty_(false), ion_device_(ion_device), type_(type), is_active_(false) { OVDBG_VERBOSE("%s:Enter ", __func__); #if USE_CAIRO cr_surface_ = nullptr; cr_context_ = nullptr; #endif OVDBG_VERBOSE("%s:Exit ", __func__); } #endif // OVERLAY_OPEN_CL_BLIT OverlayItem::~OverlayItem() { DestroySurface(); } void OverlayItem::MarkDirty(bool dirty) { dirty_ = dirty; OVDBG_VERBOSE("%s: OverlayItem Type(%d) marked dirty!", __func__, type_); } void OverlayItem::Activate(bool value) { is_active_ = value; OVDBG_VERBOSE("%s: OverlayItem Type(%d) Activated!", __func__, type_); } int32_t OverlayItem::AllocateIonMemory(IonMemInfo& mem_info, uint32_t size) { OVDBG_VERBOSE("%s:Enter", __func__); int32_t ret = 0; void* data = nullptr; uint32_t flags = ION_FLAG_CACHED; int32_t map_fd = -1; uint32_t heap_id_mask = ION_HEAP(ION_SYSTEM_HEAP_ID); size = ROUND_TO(size, 4096); ret = ion_alloc_fd(ion_device_, size, 0, heap_id_mask, flags, &map_fd); if (ret) { OVDBG_ERROR("%s:ION allocation failed\n", __func__); return -1; } data = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0); if (data == MAP_FAILED) { OVDBG_ERROR("%s:ION mmap failed: %s (%d)\n", __func__, strerror(errno), errno); goto ION_MAP_FAILED; } SyncStart(map_fd); mem_info.fd = map_fd; mem_info.size = size; mem_info.vaddr = data; OVDBG_VERBOSE("%s:Exit ", __func__); return ret; ION_MAP_FAILED: close(map_fd); return -1; } void OverlayItem::FreeIonMemory(void *&vaddr, int32_t &ion_fd, uint32_t size) { if (vaddr) { if (ion_fd != -1) SyncEnd(ion_fd); munmap(vaddr, size); vaddr = nullptr; } if (ion_fd != -1) { close(ion_fd); ion_fd = -1; } } int32_t OverlayItem::MapOverlaySurface(OverlaySurface &surface, IonMemInfo &mem_info, int32_t format) { OVDBG_VERBOSE("%s:Enter ", __func__); int32_t ret = 0; #ifdef OVERLAY_OPEN_CL_BLIT ret = OpenClKernel::MapImage(surface.cl_buffer_, mem_info.vaddr, mem_info.fd, surface.width_, surface.height_, surface.width_ * 4); if (ret) { OVDBG_ERROR("%s: Failed to map image!",__func__); return -1; } #else // OVERLAY_OPEN_CL_BLIT ret = c2dMapAddr(mem_info.fd, mem_info.vaddr, mem_info.size, 0, KGSL_USER_MEM_TYPE_ION, &surface.gpu_addr_); if (ret != C2D_STATUS_OK) { OVDBG_ERROR("%s: c2dMapAddr failed!",__func__); return -1; } C2D_RGB_SURFACE_DEF c2dSurfaceDef; c2dSurfaceDef.format = format; c2dSurfaceDef.width = surface.width_; c2dSurfaceDef.height = surface.height_; c2dSurfaceDef.buffer = mem_info.vaddr; c2dSurfaceDef.phys = surface.gpu_addr_; c2dSurfaceDef.stride = surface.width_ * 
4; // Create source c2d surface. ret = c2dCreateSurface(&surface.c2dsurface_id_, C2D_SOURCE, (C2D_SURFACE_TYPE)(C2D_SURFACE_RGB_HOST | C2D_SURFACE_WITH_PHYS), &c2dSurfaceDef); if (ret != C2D_STATUS_OK) { OVDBG_ERROR("%s: c2dCreateSurface failed!",__func__); c2dUnMapAddr(surface.gpu_addr_); surface.gpu_addr_ = nullptr; return -1; } #endif // OVERLAY_OPEN_CL_BLIT surface.ion_fd_ = mem_info.fd; surface.vaddr_ = mem_info.vaddr; surface.size_ = mem_info.size; OVDBG_VERBOSE("%s: Exit ", __func__); return 0; } void OverlayItem::UnMapOverlaySurface(OverlaySurface &surface) { #ifdef OVERLAY_OPEN_CL_BLIT OpenClKernel::unMapImage(surface_.cl_buffer_); #else // OVERLAY_OPEN_CL_BLIT if (surface.gpu_addr_) { c2dUnMapAddr(surface.gpu_addr_); surface.gpu_addr_ = nullptr; OVDBG_INFO("%s: Unmapped text GPU address for type(%d)", __func__, type_); } if (surface.c2dsurface_id_) { c2dDestroySurface(surface.c2dsurface_id_); surface.c2dsurface_id_ = -1; OVDBG_INFO("%s: Destroyed c2d text Surface for type(%d)", __func__, type_); } #endif // OVERLAY_OPEN_CL_BLIT } void OverlayItem::ExtractColorValues(uint32_t hex_color, RGBAValues* color) { color->red = ((hex_color >> 24) & 0xff) / 255.0; color->green = ((hex_color >> 16) & 0xff) / 255.0; color->blue = ((hex_color >> 8) & 0xff) / 255.0; color->alpha = ((hex_color) & 0xff) / 255.0; } void OverlayItem::ClearSurface() { #if USE_CAIRO RGBAValues bg_color; memset(&bg_color, 0x0, sizeof bg_color); // Painting entire surface with background color or with fully transparent // color doesn't work since cairo uses the OVER compositing operator // by default, and blending something entirely transparent OVER something // else has no effect at all until compositing operator is changed to SOURCE, // the SOURCE operator copies both color and alpha values directly from the // source to the destination instead of blending. 
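  // [Editor's sketch, not in the original source] With premultiplied alpha,
  // OVER computes dst' = src + dst * (1 - src.alpha), so a fully transparent
  // src leaves dst unchanged; clearing therefore needs CLEAR (or SOURCE):
  //   cairo_set_operator(cr, CAIRO_OPERATOR_CLEAR);
  //   cairo_paint(cr);                              // zeroes all RGBA bytes
  //   cairo_set_operator(cr, CAIRO_OPERATOR_OVER);  // restore the default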
#ifdef DEBUG_BACKGROUND_SURFACE ExtractColorValues(BG_DEBUG_COLOR, &bg_color); cairo_set_source_rgba(cr_context_, bg_color.red, bg_color.green, bg_color.blue, bg_color.alpha); cairo_set_operator(cr_context_, CAIRO_OPERATOR_SOURCE); #else cairo_set_operator(cr_context_, CAIRO_OPERATOR_CLEAR); #endif cairo_paint(cr_context_); cairo_surface_flush(cr_surface_); cairo_set_operator(cr_context_, CAIRO_OPERATOR_OVER); assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_)); cairo_surface_mark_dirty(cr_surface_); #endif } void OverlayItem::DestroySurface() { OVDBG_VERBOSE("%s: Enter", __func__); MarkDirty(true); UnMapOverlaySurface(surface_); FreeIonMemory(surface_.vaddr_, surface_.ion_fd_, surface_.size_); #if USE_CAIRO if (cr_surface_) { cairo_surface_destroy(cr_surface_); } if (cr_context_) { cairo_destroy(cr_context_); } #endif OVDBG_VERBOSE("%s: Exit", __func__); } OverlayItemStaticImage::~OverlayItemStaticImage() { OVDBG_VERBOSE("%s: Enter", __func__); image_path_.clear(); OVDBG_VERBOSE("%s: Exit", __func__); } void OverlayItemStaticImage::DestroySurface() { OVDBG_VERBOSE("%s: Enter", __func__); MarkDirty(true); UnMapOverlaySurface(surface_); FreeIonMemory(surface_.vaddr_, surface_.ion_fd_, surface_.size_); OVDBG_VERBOSE("%s: Exit", __func__); } int32_t OverlayItemStaticImage::Init(OverlayParam& param) { OVDBG_VERBOSE("%s: Enter", __func__); int32_t ret = 0; if(param.dst_rect.width <= 0 || param.dst_rect.height <= 0) { OVDBG_ERROR("%s: Image Width & Height is not correct!", __func__); return BAD_VALUE; } location_type_ = param.location; x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; image_type_ = param.image_info.image_type; if (param.image_info.image_type == OverlayImageType::kFilePath) { image_path_.setTo(param.image_info.image_location, strlen(param.image_info.image_location) + 1); } else if (param.image_info.image_type == OverlayImageType::kBlobType) { image_buffer_ = param.image_info.image_buffer; image_size_ = param.image_info.image_size; surface_.width_ = param.image_info.source_rect.width; surface_.height_ = param.image_info.source_rect.height; OVDBG_VERBOSE("%s: image blob image_buffer_::0x%p image_size_::%u " "image_width_::%u image_height_::%u ", __func__, image_buffer_, image_size_, surface_.width_, surface_.height_); char prop_val[PROPERTY_VALUE_MAX]; property_get(PROP_DUMP_BLOB_IMAGE, prop_val, "0"); blob_image_dump_enabled_ = (atoi(prop_val) == 0) ? false : true; if (blob_image_dump_enabled_) { FILE* pFile; pFile = fopen("/data/misc/qmmf/overlay_image_blob.rgb","wb"); if (pFile ){ fwrite(image_buffer_, sizeof(char), image_size_, pFile); fclose(pFile); } } crop_rect_x_ = param.image_info.source_rect.start_x; crop_rect_y_ = param.image_info.source_rect.start_y; crop_rect_width_ = param.image_info.source_rect.width; crop_rect_height_ = param.image_info.source_rect.height; OVDBG_VERBOSE("%s: image blob crop_rect_x_::%u crop_rect_y_::%u " "crop_rect_width_::%u crop_rect_height_::%u", __func__, crop_rect_x_, crop_rect_y_,crop_rect_width_, crop_rect_height_); } ret = CreateSurface(); if(ret != 0) { OVDBG_ERROR("%s: createLogoSurface failed!", __func__); return ret; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; } int32_t OverlayItemStaticImage::UpdateAndDraw() { #ifndef OVERLAY_OPEN_CL_BLIT // Nothing to update, contents are static. // Never marked as dirty. 
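  // [Editor's note, derived from the code below] The exception is blob-type
  // images: when the caller swaps in same-sized pixel data,
  // UpdateParameters() copies it into the mapped surface and the
  // c2dSurfaceUpdated() call below tells C2D that the contents changed.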
std::lock_guard lock(update_param_lock_); if (blob_buffer_updated_) { c2dSurfaceUpdated(surface_.c2dsurface_id_, nullptr); } #endif // OVERLAY_OPEN_CL_BLIT return OK; } void OverlayItemStaticImage::GetDrawInfo(uint32_t targetWidth, uint32_t targetHeight, std::vector& draw_infos) { OVDBG_VERBOSE("%s: Enter", __func__); DrawInfo draw_info; memset(&draw_info, 0x0, sizeof(DrawInfo)); draw_info.width = width_; draw_info.height = height_; int32_t xMargin = targetWidth * OVERLAYITEM_X_MARGIN_PERCENT/100; int32_t yMargin = targetHeight * OVERLAYITEM_Y_MARGIN_PERCENT/100; int32_t x = 0; int32_t y = 0; switch (location_type_) { case OverlayLocationType::kTopLeft: x = xMargin; y = yMargin; break; case OverlayLocationType::kTopRight: x = targetWidth - (width_ + xMargin); y = yMargin; break; case OverlayLocationType::kCenter: x = (targetWidth - width_)/2; y = (targetHeight - height_)/2; break; case OverlayLocationType::kBottomLeft: x = xMargin; y = targetHeight - (height_ + yMargin); break; case OverlayLocationType::kBottomRight: x = targetWidth - (width_ + xMargin); y = targetHeight - (height_ + yMargin); break; case OverlayLocationType::kRandom: x = x_; y = y_; break; case OverlayLocationType::kNone: default: x = x_; y = y_; break; } draw_info.x = x; draw_info.y = y; #ifdef OVERLAY_OPEN_CL_BLIT draw_info.mask = surface_.cl_buffer_; draw_info.blit_inst = surface_.blit_inst_; #else // OVERLAY_OPEN_CL_BLIT draw_info.c2dSurfaceId = surface_.c2dsurface_id_; #endif // OVERLAY_OPEN_CL_BLIT if (width_ != crop_rect_width_ || height_ != crop_rect_height_) { draw_info.in_width = crop_rect_width_; draw_info.in_height = crop_rect_height_; draw_info.in_x = crop_rect_x_; draw_info.in_y = crop_rect_y_; } else { draw_info.in_width = 0; draw_info.in_height = 0; draw_info.in_x = 0; draw_info.in_y = 0; } draw_infos.push_back(draw_info); OVDBG_VERBOSE("%s: Exit", __func__); } void OverlayItemStaticImage::GetParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); param.type = OverlayType::kStaticImage; param.location = location_type_; param.dst_rect.start_x = x_; param.dst_rect.start_y = y_; param.dst_rect.width = width_; param.dst_rect.height = height_; std::string str(image_path_.string()); str.copy(param.image_info.image_location, image_path_.length()); OVDBG_VERBOSE("%s:Exit ",__func__); } int32_t OverlayItemStaticImage::UpdateParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); std::lock_guard lock(update_param_lock_); int32_t ret = 0; if(strcmp(image_path_.string(), param.image_info.image_location) != 0) { OVDBG_ERROR("%s: Image Path Can't be changed at run time!!", __func__); return BAD_VALUE; } if(param.dst_rect.width <= 0 || param.dst_rect.height <= 0) { OVDBG_ERROR("%s: Image Width & Height is not correct!", __func__); return BAD_VALUE; } location_type_ = param.location; x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; if (image_type_ == OverlayImageType::kBlobType) { image_buffer_ = param.image_info.image_buffer; image_size_ = param.image_info.image_size; surface_.width_ = param.image_info.source_rect.width; surface_.height_ = param.image_info.source_rect.height; OVDBG_DEBUG("%s: updated image blob image_buffer_::0x%p image_size_::%u " "image_width_::%u image_height_::%u ", __func__, image_buffer_, param.image_info.image_size, surface_.width_, surface_.height_); crop_rect_x_ = param.image_info.source_rect.start_x; crop_rect_y_ = param.image_info.source_rect.start_y; crop_rect_width_ = 
param.image_info.source_rect.width;
    crop_rect_height_ = param.image_info.source_rect.height;
    OVDBG_DEBUG("%s: updated image blob crop_rect_x_::%u crop_rect_y_::%u "
        "crop_rect_width_::%u crop_rect_height_::%u", __func__, crop_rect_x_,
        crop_rect_y_, crop_rect_width_, crop_rect_height_);

    if (blob_image_dump_enabled_) {
      String8 blobbuffer_filepath;
      struct timeval tv;
      gettimeofday(&tv, nullptr);
      blobbuffer_filepath.appendFormat(
          "/data/misc/qmmf/overlay_blob_buffer_%lu.%s", tv.tv_sec, "rgb");
      blob_buffer_file_fd_ = open(blobbuffer_filepath.string(),
          O_CREAT | O_WRONLY | O_TRUNC, 0655);
      assert(blob_buffer_file_fd_ >= 0);
      ssize_t bytes_written = write(blob_buffer_file_fd_, image_buffer_,
          param.image_info.image_size);
      if (bytes_written != static_cast<ssize_t>(param.image_info.image_size)) {
        OVDBG_ERROR("Bytes written = %zd, expected = %u", bytes_written,
            param.image_info.image_size);
      }
      close(blob_buffer_file_fd_);
    }
    // Only the buffer content changed, not the buffer size.
    if (param.image_info.buffer_updated &&
        (param.image_info.image_size == image_size_)) {
      OVDBG_DEBUG("%s: updated image_size_:: %u param.image_info.image_size::"
          " %u ", __func__, image_size_, param.image_info.image_size);
      uint32_t size = param.image_info.image_size;
      uint32_t* pixels = static_cast<uint32_t *>(surface_.vaddr_);
      memcpy(pixels, image_buffer_, size);
      blob_buffer_updated_ = param.image_info.buffer_updated;
      MarkDirty(true);
    } else if (param.image_info.image_size != image_size_) {
      image_size_ = param.image_info.image_size;
      DestroySurface();
      ret = CreateSurface();
      if (ret != 0) {
        OVDBG_ERROR("%s: CreateSurface failed!", __func__);
        return ret;
      }
    }
    image_size_ = param.image_info.image_size;
  }
  OVDBG_VERBOSE("%s:Exit ",__func__);
  return ret;
}

int32_t OverlayItemStaticImage::CreateSurface() {
  OVDBG_VERBOSE("%s:Enter ",__func__);
  int32_t ret = 0;
  int32_t format;
  uint32_t size;
  IonMemInfo mem_info;
  memset(&mem_info, 0x0, sizeof(IonMemInfo));

  if (image_type_ == OverlayImageType::kFilePath) {
    // The raw file must hold exactly width_ * height_ * 4 bytes of RGBA8888
    // data; e.g. a 64x32 logo file is expected to be 8192 bytes.
    size = width_ * height_ * 4;
    ret = AllocateIonMemory(mem_info, size);
    if(0 != ret) {
      OVDBG_ERROR("%s:AllocateIonMemory failed",__func__);
      return ret;
    }
    uint32_t* pixels = (uint32_t*)mem_info.vaddr;
    //Load raw logo image file.
    FILE *file = fopen(image_path_.string(), "rb");
    if(file) {
      size_t bytes = fread(pixels, 1, size, file);
      OVDBG_INFO("%s: Total bytes = %zu",__func__, bytes);
      if(bytes != size) {
        OVDBG_ERROR("%s: Raw file format is not correct",__func__);
        fclose(file);
        goto ERROR;
      }
      fclose(file);
    } else {
      OVDBG_ERROR("%s: (%s)File open Failed!!",__func__, image_path_.string());
      goto ERROR;
    }
  } else if(image_type_ == OverlayImageType::kBlobType) {
    size = image_size_;
    ret = AllocateIonMemory(mem_info, size);
    if(0 != ret) {
      OVDBG_ERROR("%s:AllocateIonMemory failed",__func__);
      return ret;
    }
    uint32_t* pixels = static_cast<uint32_t *>(mem_info.vaddr);
    memcpy(pixels, image_buffer_, size);
  }
  format = C2D_FORMAT_SWAP_ENDIANNESS | C2D_COLOR_FORMAT_8888_RGBA;
  ret = MapOverlaySurface(surface_, mem_info, format);
  if (ret) {
    OVDBG_ERROR("%s: Map failed!",__func__);
    goto ERROR;
  }
  OVDBG_VERBOSE("%s: Exit ",__func__);
  return ret;

ERROR:
  close(surface_.ion_fd_);
  surface_.ion_fd_ = -1;
  return ret;
}

#ifdef OVERLAY_OPEN_CL_BLIT
OverlayItemDateAndTime::OverlayItemDateAndTime(int32_t ion_device,
    std::shared_ptr<OpenClKernel> &blit)
    : OverlayItem(ion_device, OverlayType::kDateType, blit) {
  OVDBG_VERBOSE("%s:Enter ", __func__);
  memset(&date_time_type_, 0x0, sizeof date_time_type_);
  date_time_type_.time_format = OverlayTimeFormatType::kHHMM_24HR;
  date_time_type_.date_format = OverlayDateFormatType::kMMDDYYYY;
  OVDBG_VERBOSE("%s:Exit", __func__);
}
#else // OVERLAY_OPEN_CL_BLIT
OverlayItemDateAndTime::OverlayItemDateAndTime(int32_t ion_device)
    : OverlayItem(ion_device, OverlayType::kDateType) {
  OVDBG_VERBOSE("%s:Enter ", __func__);
  memset(&date_time_type_, 0x0, sizeof date_time_type_);
  date_time_type_.time_format = OverlayTimeFormatType::kHHMM_24HR;
  date_time_type_.date_format = OverlayDateFormatType::kMMDDYYYY;
  OVDBG_VERBOSE("%s:Exit", __func__);
}
#endif // OVERLAY_OPEN_CL_BLIT

OverlayItemDateAndTime::~OverlayItemDateAndTime() {
  OVDBG_VERBOSE("%s:Enter ", __func__);
  OVDBG_VERBOSE("%s:Exit ", __func__);
}

int32_t OverlayItemDateAndTime::Init(OverlayParam& param) {
  OVDBG_VERBOSE("%s: Enter", __func__);
  if (param.dst_rect.width <= 0 || param.dst_rect.height <= 0) {
    return BAD_VALUE;
  }
  if (param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) {
    return BAD_VALUE;
  }
  location_type_ = param.location;
  text_color_ = param.color;
  x_ = param.dst_rect.start_x;
  y_ = param.dst_rect.start_y;
  width_ = param.dst_rect.width;
  height_ = param.dst_rect.height;
  prev_time_ = 0;
  date_time_type_.date_format = param.date_time.date_format;
  date_time_type_.time_format = param.date_time.time_format;

  // Create surface with the same aspect ratio
  surface_.width_ = ROUND_TO(kCairoBufferMinWidth, 16);
  surface_.height_ = kCairoBufferMinWidth * height_ / width_;
  // Recalculate if surface height is less than minimum
  if (surface_.height_ < kCairoBufferMinHeight) {
    surface_.height_ = kCairoBufferMinHeight;
    surface_.width_ = ROUND_TO(kCairoBufferMinHeight * width_ / height_, 16);
    // recalculated height according to aligned width
    surface_.height_ = surface_.width_ * height_ / width_;
  }
  OVDBG_INFO("%s: Offscreen buffer:(%dx%d)",__func__, surface_.width_,
      surface_.height_);

  auto ret = CreateSurface();
  if(ret != 0) {
    OVDBG_ERROR("%s: CreateSurface failed!", __func__);
    return ret;
  }
  OVDBG_VERBOSE("%s: Exit", __func__);
  return ret;
}

int32_t OverlayItemDateAndTime::UpdateAndDraw() {
  OVDBG_VERBOSE("%s: Enter", __func__);
  int32_t ret = 0;
  if(!dirty_)
    return ret;

  struct timeval tv;
  time_t now_time;
  struct tm *time;
  char date_buf[40];
  char time_buf[40];
  gettimeofday(&tv, nullptr);
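  // [Editor's note, not in the original source] tv.tv_sec has one-second
  // granularity, so the prev_time_ comparison below redraws the clock at
  // most once per second; within the same second the item stays dirty but
  // skips the expensive text rendering.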
now_time = tv.tv_sec; OVDBG_VERBOSE("%s: curr time %ld prev time %ld", __func__, now_time, prev_time_); if (prev_time_ == now_time) { MarkDirty(true); return ret; } prev_time_ = now_time; time = localtime(&now_time); switch(date_time_type_.date_format) { case OverlayDateFormatType::kYYYYMMDD: strftime(date_buf, sizeof date_buf, "%Y/%m/%d", time); break; case OverlayDateFormatType::kMMDDYYYY: default: strftime(date_buf, sizeof date_buf, "%m/%d/%Y", time); break; } switch(date_time_type_.time_format) { case OverlayTimeFormatType::kHHMMSS_24HR: strftime(time_buf, sizeof time_buf, "%H:%M:%S", time); break; case OverlayTimeFormatType::kHHMMSS_AMPM: strftime(time_buf, sizeof time_buf, "%r", time); break; case OverlayTimeFormatType::kHHMM_24HR: strftime(time_buf, sizeof time_buf, "%H:%M", time); break; case OverlayTimeFormatType::kHHMM_AMPM: default: strftime(time_buf, sizeof time_buf, "%I:%M %p", time); break; } OVDBG_VERBOSE("%s: date:time (%s:%s)", __func__, date_buf, time_buf); double x_date, x_time, y_date, y_time; x_date = x_time = y_date = y_time = 0.0; SyncStart(surface_.ion_fd_); #if USE_CAIRO // Clear the privous drawn contents. ClearSurface(); cairo_select_font_face(cr_context_, "@cairo:Georgia", CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_NORMAL); cairo_set_font_size (cr_context_, kTextSize); cairo_set_antialias (cr_context_, CAIRO_ANTIALIAS_BEST); assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_)); cairo_font_extents_t font_extent; cairo_font_extents (cr_context_, &font_extent); OVDBG_VERBOSE("%s: ascent=%f, descent=%f, height=%f, max_x_advance=%f," " max_y_advance = %f", __func__, font_extent.ascent, font_extent.descent, font_extent.height, font_extent.max_x_advance, font_extent.max_y_advance); cairo_text_extents_t date_text_extents; cairo_text_extents (cr_context_, date_buf, &date_text_extents); OVDBG_VERBOSE("%s: Date: te.x_bearing=%f, te.y_bearing=%f, te.width=%f," " te.height=%f, te.x_advance=%f, te.y_advance=%f", __func__, date_text_extents.x_bearing, date_text_extents.y_bearing, date_text_extents.width, date_text_extents.height, date_text_extents.x_advance, date_text_extents.y_advance); cairo_font_options_t *options; options = cairo_font_options_create (); cairo_font_options_set_antialias (options, CAIRO_ANTIALIAS_DEFAULT); cairo_set_font_options (cr_context_, options); cairo_font_options_destroy (options); //(0,0) is at topleft corner of draw buffer. x_date = (surface_.width_ - date_text_extents.width) / 2.0; y_date = std::max(surface_.height_ / 2.0, date_text_extents.height); OVDBG_VERBOSE("%s: x_date=%f, y_date=%f, ref=%f", __func__, x_date, y_date, date_text_extents.height - (font_extent.descent/2.0)); cairo_move_to (cr_context_, x_date, y_date); // Draw date. RGBAValues text_color; memset(&text_color, 0x0, sizeof text_color); ExtractColorValues(text_color_, &text_color); cairo_set_source_rgba (cr_context_, text_color.red, text_color.green, text_color.blue, text_color.alpha); cairo_show_text (cr_context_, date_buf); assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_)); // Draw time. cairo_text_extents_t time_text_extents; cairo_text_extents (cr_context_, time_buf, &time_text_extents); OVDBG_VERBOSE("%s: Time: te.x_bearing=%f, te.y_bearing=%f, te.width=%f," " te.height=%f, te.x_advance=%f, te.y_advance=%f", __func__, time_text_extents.x_bearing, time_text_extents.y_bearing, time_text_extents.width, time_text_extents.height, time_text_extents.x_advance, time_text_extents.y_advance); // Calculate the x_time to draw the time text extact middle of buffer. 
// Use the text extents width, which is usually a few pixels narrower than
  // the actually drawn text.
  x_time = (surface_.width_ - time_text_extents.width) / 2.0;
  y_time = y_date + date_text_extents.height;
  cairo_move_to (cr_context_, x_time, y_time);
  cairo_show_text (cr_context_, time_buf);
  assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_));
  cairo_surface_flush(cr_surface_);
  cairo_surface_mark_dirty(cr_surface_);
#elif USE_SKIA
#ifndef DEBUG_BACKGROUND_SURFACE
  canvas_->clear(SK_AlphaOPAQUE);
#else
  canvas_->clear(SK_ColorDKGRAY);
#endif
  const char* delm = " : ";
  std::string data_time_buf;
  data_time_buf += date_buf;
  data_time_buf += delm;
  data_time_buf += time_buf;

  SkPaint paint;
  paint.setColor(text_color_);
  paint.setTextSize(SkIntToScalar(kTextSize));
  paint.setAntiAlias(false);
  paint.setTextScaleX(1);

  SkString dateText(data_time_buf.c_str(), data_time_buf.size());
  y_date = DATETIME_TEXT_BUF_HEIGHT - kTextSize;
  canvas_->drawText(dateText.c_str(), dateText.size(), x_date, y_date, paint);
  canvas_->flush();
#endif
  SyncEnd(surface_.ion_fd_);
  MarkDirty(true);
  OVDBG_VERBOSE("%s: Exit", __func__);
  return ret;
}

void OverlayItemDateAndTime::GetDrawInfo(uint32_t targetWidth,
    uint32_t targetHeight, std::vector<DrawInfo>& draw_infos) {
  OVDBG_VERBOSE("%s:Enter ",__func__);
  DrawInfo draw_info;
  memset(&draw_info, 0x0, sizeof(DrawInfo));
  draw_info.width = width_;
  draw_info.height = height_;
  int32_t xMargin = targetWidth * OVERLAYITEM_X_MARGIN_PERCENT/100;
  int32_t yMargin = targetHeight * OVERLAYITEM_Y_MARGIN_PERCENT/100;
  int32_t x = 0;
  int32_t y = 0;
  //(0,0) is at topleft corner.
  switch (location_type_) {
    case OverlayLocationType::kTopLeft:
      x = xMargin;
      y = yMargin;
      break;
    case OverlayLocationType::kTopRight:
      x = targetWidth - (draw_info.width + xMargin);
      y = yMargin;
      break;
    case OverlayLocationType::kCenter:
      x = (targetWidth - draw_info.width)/2;
      y = (targetHeight - draw_info.height)/2;
      break;
    case OverlayLocationType::kBottomLeft:
      x = xMargin;
      y = targetHeight - (draw_info.height + yMargin);
      break;
    case OverlayLocationType::kBottomRight:
      x = targetWidth - (draw_info.width + xMargin);
      y = targetHeight - (draw_info.height + yMargin);
      break;
    case OverlayLocationType::kRandom:
      x = x_;
      y = y_;
      break;
    case OverlayLocationType::kNone:
    default:
      x = x_;
      y = y_;
      break;
  }
  draw_info.x = x;
  draw_info.y = y;
#ifdef OVERLAY_OPEN_CL_BLIT
  draw_info.mask = surface_.cl_buffer_;
  draw_info.blit_inst = surface_.blit_inst_;
#else // OVERLAY_OPEN_CL_BLIT
  draw_info.c2dSurfaceId = surface_.c2dsurface_id_;
#endif // OVERLAY_OPEN_CL_BLIT
  draw_infos.push_back(draw_info);
  OVDBG_VERBOSE("%s:Exit ",__func__);
}

void OverlayItemDateAndTime::GetParameters(OverlayParam& param) {
  OVDBG_VERBOSE("%s:Enter ",__func__);
  param.type = OverlayType::kDateType;
  param.location = location_type_;
  param.color = text_color_;
  param.dst_rect.start_x = x_;
  param.dst_rect.start_y = y_;
  param.dst_rect.width = width_;
  param.dst_rect.height = height_;
  param.date_time.date_format = date_time_type_.date_format;
  param.date_time.time_format = date_time_type_.time_format;
  OVDBG_VERBOSE("%s:Exit ",__func__);
}

int32_t OverlayItemDateAndTime::UpdateParameters(OverlayParam& param) {
  OVDBG_VERBOSE("%s:Enter ",__func__);
  int32_t ret = 0;
  if (param.dst_rect.width <= 0 || param.dst_rect.height <= 0) {
    return BAD_VALUE;
  }
  if (param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) {
    return BAD_VALUE;
  }
  location_type_ = param.location;
  text_color_ = param.color;
  x_ = param.dst_rect.start_x;
  y_ = param.dst_rect.start_y;
  date_time_type_.date_format = param.date_time.date_format;
  date_time_type_.time_format = param.date_time.time_format;

  if (width_ != param.dst_rect.width || height_ != param.dst_rect.height) {
    width_ = param.dst_rect.width;
    height_ = param.dst_rect.height;
    prev_time_ = 0;
    // Create surface with the same aspect ratio
    surface_.width_ = ROUND_TO(kCairoBufferMinWidth, 16);
    surface_.height_ = kCairoBufferMinWidth * height_ / width_;
    // Recalculate if surface height is less than minimum
    if (surface_.height_ < kCairoBufferMinHeight) {
      surface_.height_ = kCairoBufferMinHeight;
      surface_.width_ = ROUND_TO(kCairoBufferMinHeight * width_ / height_, 16);
      // recalculated height according to aligned width
      surface_.height_ = surface_.width_ * height_ / width_;
    }
    OVDBG_INFO("%s: New Offscreen buffer:(%dx%d)",__func__, surface_.width_,
        surface_.height_);
    DestroySurface();
    ret = CreateSurface();
    if (ret != 0) {
      OVDBG_ERROR("%s: CreateSurface failed!", __func__);
      return ret;
    }
  }
  OVDBG_VERBOSE("%s:Exit ",__func__);
  return ret;
}

int32_t OverlayItemDateAndTime::CreateSurface() {
  OVDBG_VERBOSE("%s: Enter", __func__);
  int32_t ret = 0;
  int32_t format;
  int32_t size = width_ * height_ * 4;
  IonMemInfo mem_info;
  memset(&mem_info, 0x0, sizeof(IonMemInfo));

  ret = AllocateIonMemory(mem_info, size);
  if(0 != ret) {
    OVDBG_ERROR("%s:AllocateIonMemory failed",__func__);
    return ret;
  }
  OVDBG_INFO("%s: ION memory allocated fd = %d",__func__, mem_info.fd);

#if USE_CAIRO
  cr_surface_ = cairo_image_surface_create_for_data(
      static_cast<unsigned char*>(mem_info.vaddr), CAIRO_FORMAT_ARGB32,
      surface_.width_, surface_.height_, surface_.width_ * 4);
  assert (cr_surface_ != nullptr);
  cr_context_ = cairo_create (cr_surface_);
  assert (cr_context_ != nullptr);
#elif USE_SKIA
  //Create Skia canvas out of ION memory.
  SkImageInfo imageInfo = SkImageInfo::Make(width_, height_,
      kRGBA_8888_SkColorType, kPremul_SkAlphaType);
#ifdef ANDROID_O_OR_ABOVE
  canvas_ = (SkCanvas::MakeRasterDirect(imageInfo, mem_info.vaddr,
      width_ * 4)).release();
#else
  canvas_ = SkCanvas::NewRasterDirect(imageInfo, mem_info.vaddr, width_ * 4);
#endif
  if(!canvas_) {
    OVDBG_ERROR("%s: Skia Creation failed!!",__func__);
    goto ERROR;
  }
#endif
  // Draw the system time onto the fresh canvas.
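  // [Editor's note, an assumption] CAIRO_FORMAT_ARGB32 is premultiplied ARGB
  // in native word order, which is why the cairo path below registers the
  // surface as C2D_COLOR_FORMAT_8888_ARGB, while the Skia RGBA path needs
  // C2D_FORMAT_SWAP_ENDIANNESS in addition.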
UpdateAndDraw(); #if USE_CAIRO format = C2D_COLOR_FORMAT_8888_ARGB; #elif USE_SKIA format = C2D_FORMAT_SWAP_ENDIANNESS | C2D_COLOR_FORMAT_8888_RGBA; #endif ret = MapOverlaySurface(surface_, mem_info, format); if (ret) { OVDBG_ERROR("%s: Map failed!",__func__); goto ERROR; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; ERROR: close(surface_.ion_fd_); surface_.ion_fd_ = -1; return ret; } #ifdef OVERLAY_OPEN_CL_BLIT OverlayItemBoundingBox::OverlayItemBoundingBox(int32_t ion_device, std::shared_ptr &blit) : OverlayItem(ion_device, OverlayType::kBoundingBox, blit), bbox_name_(), text_height_(0) { OVDBG_VERBOSE("%s: Enter", __func__); if (blit.get()) { // Create local instance of blit kernel text_surface_.blit_inst_ = blit->AddInstance(); } OVDBG_VERBOSE("%s: Exit", __func__); }; #else // OVERLAY_OPEN_CL_BLIT OverlayItemBoundingBox::OverlayItemBoundingBox(int32_t ion_device) : OverlayItem(ion_device, OverlayType::kBoundingBox), bbox_name_(), text_height_(0) { OVDBG_VERBOSE("%s: Enter", __func__); OVDBG_VERBOSE("%s: Exit", __func__); }; #endif // OVERLAY_OPEN_CL_BLIT OverlayItemBoundingBox::~OverlayItemBoundingBox() { OVDBG_INFO("%s: Enter", __func__); DestroyTextSurface(); OVDBG_INFO("%s: Exit", __func__); } int32_t OverlayItemBoundingBox::Init(OverlayParam& param) { OVDBG_VERBOSE("%s: Enter", __func__); if ((param.dst_rect.width <= 0) || (param.dst_rect.height <= 0)) { return BAD_VALUE; } if (param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { return BAD_VALUE; } x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; bbox_color_ = param.color; surface_.width_ = kBoxBuffWidth; surface_.height_ = ROUND_TO((surface_.width_ * height_) / width_, 2); OVDBG_INFO("%s: Offscreen buffer:(%dx%d)",__func__, surface_.width_, surface_.height_); #if USE_CAIRO text_surface_.width_ = 320; text_surface_.height_ = 80; box_stroke_width_ = (kStrokeWidth * surface_.width_ + width_ - 1) / width_; char prop_val[PROPERTY_VALUE_MAX]; property_get(PROP_BOX_STROKE_WIDTH, prop_val, "4"); box_stroke_width_ = (static_cast(atoi(prop_val)) > box_stroke_width_) ? static_cast(atoi(prop_val)) : box_stroke_width_; #endif int32_t textLen = strlen(param.bounding_box.box_name); int32_t textLimit = std::min(textLen + 1, kTextLimit); bbox_name_.setTo(param.bounding_box.box_name, textLimit); auto ret = CreateSurface(); if (ret != 0) { OVDBG_ERROR("%s: CreateSurface failed!", __func__); return NO_INIT; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; } int32_t OverlayItemBoundingBox::UpdateAndDraw() { OVDBG_VERBOSE("%s: Enter ", __func__); int32_t ret = 0; if(!dirty_) { OVDBG_DEBUG("%s: Item is not dirty! Don't draw!", __func__); return ret; } // First text is drawn. // ---------- // | TEXT | // ---------- // Then bounding box is drawn // ---------- // | | // | BOX | // | | // ---------- SyncStart(surface_.ion_fd_); SyncStart(text_surface_.ion_fd_); #if USE_CAIRO OVDBG_INFO("%s: Draw bounding box and text!", __func__); ClearSurface(); ClearTextSurface(); // Draw text first. 
  cairo_select_font_face(text_cr_context_, "@cairo:Georgia",
      CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_BOLD);
  cairo_set_font_size (text_cr_context_, kTextSize);
  cairo_set_antialias(text_cr_context_, CAIRO_ANTIALIAS_BEST);

  cairo_font_extents_t font_extents;
  cairo_font_extents (text_cr_context_, &font_extents);
  OVDBG_VERBOSE("%s: BBox Font: ascent=%f, descent=%f, height=%f, "
      "max_x_advance=%f, max_y_advance = %f", __func__, font_extents.ascent,
      font_extents.descent, font_extents.height, font_extents.max_x_advance,
      font_extents.max_y_advance);

  cairo_text_extents_t text_extents;
  cairo_text_extents (text_cr_context_, bbox_name_.string(), &text_extents);
  OVDBG_VERBOSE("%s: BBox Text: te.x_bearing=%f, te.y_bearing=%f, te.width=%f,"
      " te.height=%f, te.x_advance=%f, te.y_advance=%f", __func__,
      text_extents.x_bearing, text_extents.y_bearing, text_extents.width,
      text_extents.height, text_extents.x_advance, text_extents.y_advance);

  cairo_font_options_t *options;
  options = cairo_font_options_create ();
  cairo_font_options_set_antialias (options, CAIRO_ANTIALIAS_BEST);
  cairo_set_font_options (text_cr_context_, options);
  cairo_font_options_destroy (options);

  double x_text = 0.0;
  double y_text = text_extents.height + (font_extents.descent/2.0);
  OVDBG_VERBOSE("%s: x_text=%f, y_text=%f", __func__, x_text, y_text);
  cairo_move_to (text_cr_context_, x_text, y_text);

  RGBAValues bbox_color;
  memset(&bbox_color, 0x0, sizeof bbox_color);
  ExtractColorValues(bbox_color_, &bbox_color);
  cairo_set_source_rgba (text_cr_context_, bbox_color.red, bbox_color.green,
      bbox_color.blue, bbox_color.alpha);
  cairo_show_text (text_cr_context_, bbox_name_.string());
  assert(CAIRO_STATUS_SUCCESS == cairo_status(text_cr_context_));
  cairo_surface_flush (text_cr_surface_);

  // Draw rectangle
  cairo_set_line_width (cr_context_, box_stroke_width_);
  cairo_set_source_rgba (cr_context_, bbox_color.red, bbox_color.green,
      bbox_color.blue, bbox_color.alpha);
  cairo_rectangle (cr_context_, box_stroke_width_ / 2, box_stroke_width_ / 2,
      surface_.width_ - box_stroke_width_,
      surface_.height_ - box_stroke_width_);
  cairo_stroke (cr_context_);
  assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_));
  cairo_surface_flush (cr_surface_);
#elif USE_SKIA
  if (width_ > 0 && height_ > 0) {
#ifndef DEBUG_BACKGROUND_SURFACE
    canvas_->clear(SK_AlphaOPAQUE);
#else
    canvas_->clear(SK_ColorDKGRAY);
#endif
    SkPaint paintBox, paintText;
    paintText.setColor(bbox_color_);
    paintBox.setColor(bbox_color_);
    paintText.setTextSize(SkIntToScalar(kTextSize));
    paintText.setAntiAlias(true);
    paintBox.setStrokeWidth(box_stroke_width_);
    paintBox.setStyle(SkPaint::kStroke_Style);

    int32_t xText = 0, yText = 0;
    int32_t xBBox = 0, yBBox = 0;
    if(bbox_name_.length() > 1) {
      SkString text(bbox_name_.string(), bbox_name_.length());
      // Text size is always 20% of buffer height.
      yText = surface_.height_ * kTextPercent / 100;
      // Margin between text and bounding box rect.
      yText = yText - kTextMargin;
      canvas_->drawText(text.c_str(), text.size(), xText, yText, paintText);
    }
    yBBox = yText > 0 ? kTextSize : 0;
    int32_t boxWidth = kBoxBuffWidth;
    int32_t boxHeight = surface_.height_ - yBBox;
    text_surface_.height_ = yText;
    canvas_->drawRect(SkRect::MakeXYWH(xBBox, yBBox, boxWidth, boxHeight),
        paintBox);
    canvas_->flush();
  }
#endif
  SyncEnd(surface_.ion_fd_);
  SyncEnd(text_surface_.ion_fd_);
  MarkDirty(false);
  OVDBG_VERBOSE("%s: Exit", __func__);
  return ret;
}

void OverlayItemBoundingBox::GetDrawInfo(uint32_t targetWidth,
    uint32_t targetHeight, std::vector<DrawInfo>& draw_infos) {
  OVDBG_VERBOSE("%s: Enter", __func__);
  DrawInfo draw_info_bbox;
  memset(&draw_info_bbox, 0x0, sizeof(DrawInfo));
  draw_info_bbox.x = x_;
  draw_info_bbox.y = y_;
  draw_info_bbox.width = width_;
  draw_info_bbox.height = height_;
#ifdef OVERLAY_OPEN_CL_BLIT
  draw_info_bbox.mask = surface_.cl_buffer_;
  draw_info_bbox.blit_inst = surface_.blit_inst_;
#else // OVERLAY_OPEN_CL_BLIT
  draw_info_bbox.c2dSurfaceId = surface_.c2dsurface_id_;
#endif // OVERLAY_OPEN_CL_BLIT
  draw_infos.push_back(draw_info_bbox);

#if USE_CAIRO
  DrawInfo draw_info_text;
  memset(&draw_info_text, 0x0, sizeof(DrawInfo));
  draw_info_text.x = x_ + kTextMargin;
  draw_info_text.y = y_ + kTextMargin;
  draw_info_text.width = (targetWidth * kTextPercent) / 100;
  draw_info_text.height = (draw_info_text.width * text_surface_.height_) /
      text_surface_.width_;
#ifdef OVERLAY_OPEN_CL_BLIT
  draw_info_text.mask = text_surface_.cl_buffer_;
  draw_info_text.blit_inst = text_surface_.blit_inst_;
#else // OVERLAY_OPEN_CL_BLIT
  draw_info_text.c2dSurfaceId = text_surface_.c2dsurface_id_;
#endif // OVERLAY_OPEN_CL_BLIT
  draw_infos.push_back(draw_info_text);
#endif
  OVDBG_VERBOSE("%s: Exit", __func__);
}

void OverlayItemBoundingBox::GetParameters(OverlayParam& param) {
  OVDBG_VERBOSE("%s:Enter ",__func__);
  param.type = OverlayType::kBoundingBox;
  param.location = OverlayLocationType::kNone;
  param.color = bbox_color_;
  param.dst_rect.start_x = x_;
  param.dst_rect.start_y = y_;
  param.dst_rect.width = width_;
  param.dst_rect.height = height_;
  std::string str(bbox_name_.string());
  str.copy(param.bounding_box.box_name, bbox_name_.length());
  OVDBG_VERBOSE("%s:Exit ",__func__);
}

void OverlayItemBoundingBox::ClearTextSurface() {
#if USE_CAIRO
  RGBAValues bg_color;
  memset(&bg_color, 0x0, sizeof bg_color);
  // Painting entire surface with background color or with fully transparent
  // color doesn't work since cairo uses the OVER compositing operator
  // by default, and blending something entirely transparent OVER something
  // else has no effect at all until compositing operator is changed to SOURCE,
  // the SOURCE operator copies both color and alpha values directly from the
  // source to the destination instead of blending.
#ifdef DEBUG_BACKGROUND_SURFACE ExtractColorValues(BG_DEBUG_COLOR, &bg_color); cairo_set_source_rgba(text_cr_context_, bg_color.red, bg_color.green, bg_color.blue, bg_color.alpha); cairo_set_operator(text_cr_context_, CAIRO_OPERATOR_SOURCE); #else cairo_set_operator(text_cr_context_, CAIRO_OPERATOR_CLEAR); #endif cairo_paint(text_cr_context_); cairo_surface_flush(text_cr_surface_); cairo_set_operator(text_cr_context_, CAIRO_OPERATOR_OVER); assert(CAIRO_STATUS_SUCCESS == cairo_status(text_cr_context_)); cairo_surface_mark_dirty(text_cr_surface_); #endif } int32_t OverlayItemBoundingBox::UpdateParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); int32_t ret = 0; if((param.dst_rect.width <= 0) || (param.dst_rect.height <= 0)) { return BAD_VALUE; } if(param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { return BAD_VALUE; } x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; if (surface_.height_ != ROUND_TO((surface_.width_ * height_) / width_, 2)) { surface_.height_ = ROUND_TO((surface_.width_ * height_) / width_, 2); DestroySurface(); DestroyTextSurface(); ret = CreateSurface(); if (ret != 0) { OVDBG_ERROR("%s: CreateSurface failed!", __func__); return ret; } } #if USE_CAIRO if (box_stroke_width_ != (kStrokeWidth * surface_.width_ + width_ - 1) / width_) { box_stroke_width_ = (kStrokeWidth * surface_.width_ + width_ - 1) / width_; MarkDirty(true); } #endif if (bbox_color_ != param.color) { bbox_color_ = param.color; MarkDirty(true); } if (strcmp(bbox_name_.string(), param.bounding_box.box_name)) { bbox_name_.clear(); int32_t textLen = strlen(param.bounding_box.box_name); int32_t textLimit = std::min(textLen + 1, kTextLimit); bbox_name_.setTo(param.bounding_box.box_name, textLimit); MarkDirty(true); } OVDBG_VERBOSE("%s:Exit ",__func__); return ret; } int32_t OverlayItemBoundingBox::CreateSurface() { OVDBG_VERBOSE("%s: Enter", __func__); int32_t size = surface_.width_ * surface_.height_ * 4; int32_t format; IonMemInfo mem_info; memset(&mem_info, 0x0, sizeof(IonMemInfo)); auto ret = AllocateIonMemory(mem_info, size); if(0 != ret) { OVDBG_ERROR("%s:AllocateIonMemory failed",__func__); return ret; } OVDBG_DEBUG("%s: Ion memory allocated fd(%d)", __func__, mem_info.fd); #if USE_CAIRO cr_surface_ = cairo_image_surface_create_for_data(static_cast (mem_info.vaddr), CAIRO_FORMAT_ARGB32, surface_.width_, surface_.height_, surface_.width_ * 4); assert (cr_surface_ != nullptr); cr_context_ = cairo_create (cr_surface_); assert (cr_context_ != nullptr); #elif USE_SKIA //Create Skia canvas outof ION memory. 
SkImageInfo imageInfo = SkImageInfo::Make(kBoxBuffWidth, surface_.height_, kRGBA_8888_SkColorType, kPremul_SkAlphaType); #ifdef ANDROID_O_OR_ABOVE canvas_ = (SkCanvas::MakeRasterDirect(imageInfo, mem_info.vaddr, kBoxBuffWidth *4)).release(); #else canvas_ = SkCanvas::NewRasterDirect(imageInfo, mem_info.vaddr, kBoxBuffWidth *4); #endif if(!canvas_) { OVDBG_ERROR("%s: Skia Creation failed!!", __func__); goto ERROR; } #endif #if USE_CAIRO format = C2D_COLOR_FORMAT_8888_ARGB; #elif USE_SKIA format = C2D_FORMAT_SWAP_ENDIANNESS | C2D_COLOR_FORMAT_8888_RGBA; #endif ret = MapOverlaySurface(surface_, mem_info, format); if (ret) { OVDBG_ERROR("%s: Map failed!",__func__); goto ERROR; } #if USE_CAIRO // Setup text surface size = text_surface_.width_ * text_surface_.height_ * 4; memset(&mem_info, 0x0, sizeof(IonMemInfo)); ret = AllocateIonMemory(mem_info, size); if (ret) { OVDBG_ERROR("%s:AllocateIonMemory failed", __func__); return ret; } OVDBG_INFO("%s: Ion memory allocated fd = %d", __func__, mem_info.fd); text_cr_surface_ = cairo_image_surface_create_for_data( static_cast(mem_info.vaddr), CAIRO_FORMAT_ARGB32, text_surface_.width_, text_surface_.height_, text_surface_.width_ * 4); assert(text_cr_surface_ != nullptr); text_cr_context_ = cairo_create(text_cr_surface_); assert(text_cr_context_ != nullptr); format = C2D_COLOR_FORMAT_8888_ARGB; ret = MapOverlaySurface(text_surface_, mem_info, format); if (ret) { OVDBG_ERROR("%s: Map failed!",__func__); goto ERROR; } #endif OVDBG_VERBOSE("%s: Exit", __func__); return ret; ERROR: close(surface_.ion_fd_); surface_.ion_fd_ = -1; #if USE_CAIRO close(text_surface_.ion_fd_); text_surface_.ion_fd_ = -1; #endif return ret; } void OverlayItemBoundingBox::DestroyTextSurface() { bbox_name_.clear(); #if USE_CAIRO UnMapOverlaySurface(text_surface_); FreeIonMemory(text_surface_.vaddr_, text_surface_.ion_fd_, text_surface_.size_); if (text_cr_surface_) { cairo_surface_destroy(text_cr_surface_); } if (text_cr_context_) { cairo_destroy(text_cr_context_); } #endif } OverlayItemText::~OverlayItemText() { OVDBG_VERBOSE("%s:Enter ", __func__); OVDBG_VERBOSE("%s:Exit ", __func__); } int32_t OverlayItemText::Init(OverlayParam& param) { OVDBG_VERBOSE("%s: Enter", __func__); if (param.dst_rect.width <= 0 || param.dst_rect.height <= 0) { return BAD_VALUE; } if (param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { return BAD_VALUE; } location_type_ = param.location; text_color_ = param.color; x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; text_ = param.user_text; surface_.width_ = std::max(kCairoBufferMinWidth, width_); surface_.width_ = ROUND_TO(surface_.width_, 16); surface_.height_ = std::max(kCairoBufferMinHeight, height_); OVDBG_INFO("%s: Offscreen buffer:(%dx%d)",__func__, surface_.width_, surface_.height_); auto ret = CreateSurface(); if(ret != 0) { OVDBG_ERROR("%s: CreateSurface failed!", __func__); return ret; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; } int32_t OverlayItemText::UpdateAndDraw() { OVDBG_VERBOSE("%s: Enter", __func__); int32_t ret = 0; if(!dirty_) return ret; SyncStart(surface_.ion_fd_); // Split the Text based on new line character. vector < string > res; stringstream ss(text_); // Turn the string into a stream. 
string tok; while (getline(ss, tok, '\n')) { OVDBG_INFO("%s: UserText:: Substring: %s", __func__, tok.c_str()); res.push_back(tok); } #if USE_CAIRO ClearSurface(); cairo_select_font_face(cr_context_, "@cairo:Georgia", CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_NORMAL); cairo_set_font_size (cr_context_, kTextSize); cairo_set_antialias (cr_context_, CAIRO_ANTIALIAS_BEST); assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_)); cairo_font_extents_t font_extent; cairo_font_extents (cr_context_, &font_extent); OVDBG_VERBOSE("%s: ascent=%f, descent=%f, height=%f, max_x_advance=%f," " max_y_advance = %f", __func__, font_extent.ascent, font_extent.descent, font_extent.height, font_extent.max_x_advance, font_extent.max_y_advance); cairo_text_extents_t text_extents; cairo_text_extents (cr_context_, text_.c_str(), &text_extents); OVDBG_VERBOSE("%s: Custom text: te.x_bearing=%f, te.y_bearing=%f," " te.width=%f, te.height=%f, te.x_advance=%f, te.y_advance=%f", __func__, text_extents.x_bearing, text_extents.y_bearing, text_extents.width, text_extents.height, text_extents.x_advance, text_extents.y_advance); cairo_font_options_t *options; options = cairo_font_options_create (); cairo_font_options_set_antialias (options, CAIRO_ANTIALIAS_DEFAULT); cairo_set_font_options (cr_context_, options); cairo_font_options_destroy (options); //(0,0) is at topleft corner of draw buffer. double x_text = 0.0; double y_text = 0.0; // Draw Text. RGBAValues text_color; memset(&text_color, 0x0, sizeof text_color); ExtractColorValues(text_color_, &text_color); cairo_set_source_rgba (cr_context_, text_color.red, text_color.green, text_color.blue, text_color.alpha); for (string substr: res) { y_text += text_extents.height + (font_extent.descent/2.0); OVDBG_VERBOSE("%s: x_text=%f, y_text=%f", __func__, x_text, y_text); cairo_move_to (cr_context_, x_text, y_text); cairo_show_text (cr_context_, substr.c_str()); assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_)); } cairo_surface_flush(cr_surface_); #elif USE_SKIA #ifndef DEBUG_BACKGROUND_SURFACE canvas_->clear(SK_AlphaOPAQUE); #else canvas_->clear(SK_ColorDKGRAY); #endif SkPaint paint; paint.setColor(text_color_); paint.setTextSize(SkIntToScalar(kTextSize)); paint.setAntiAlias(true); int32_t x = 0; int32_t y = 0; for (string substr: res) { // This op is required to maintain proper gap between 2 lines. y += paint.getTextSize() * 1.2f; SkString skText(substr.c_str(), substr.length()); canvas_->drawText(skText.c_str(), skText.size(), x, y, paint); } canvas_->flush(); #endif SyncEnd(surface_.ion_fd_); dirty_ = false; OVDBG_VERBOSE("%s: Exit", __func__); return ret; } void OverlayItemText::GetDrawInfo(uint32_t targetWidth, uint32_t targetHeight, std::vector& draw_infos) { OVDBG_VERBOSE("%s: Enter", __func__); DrawInfo draw_info; memset(&draw_info, 0x0, sizeof(DrawInfo)); draw_info.width = width_; draw_info.height = height_; int32_t xMargin = targetWidth * OVERLAYITEM_X_MARGIN_PERCENT/100; int32_t yMargin = targetHeight * OVERLAYITEM_Y_MARGIN_PERCENT/100; int32_t x = 0; int32_t y = 0; // (0,0) is at topleft corner. 
switch (location_type_) { case OverlayLocationType::kTopLeft: x = xMargin; y = yMargin; break; case OverlayLocationType::kTopRight: x = targetWidth - (draw_info.width + xMargin); y = yMargin; break; case OverlayLocationType::kCenter: x = (targetWidth - draw_info.width)/2; y = (targetHeight - draw_info.height)/2; break; case OverlayLocationType::kBottomLeft: x = xMargin; y = targetHeight - (draw_info.height + yMargin); break; case OverlayLocationType::kBottomRight: x = targetWidth - (draw_info.width + xMargin); y = targetHeight - (draw_info.height + yMargin); break; case OverlayLocationType::kRandom: x = x_; y = y_; break; case OverlayLocationType::kNone: default: x = x_; y = y_; break; } draw_info.x = x; draw_info.y = y; #ifdef OVERLAY_OPEN_CL_BLIT draw_info.mask = surface_.cl_buffer_; draw_info.blit_inst = surface_.blit_inst_; #else // OVERLAY_OPEN_CL_BLIT draw_info.c2dSurfaceId = surface_.c2dsurface_id_; #endif // OVERLAY_OPEN_CL_BLIT draw_infos.push_back(draw_info); OVDBG_VERBOSE("%s: Exit", __func__); } void OverlayItemText::GetParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); param.type = OverlayType::kUserText; param.location = location_type_; param.color = text_color_; param.dst_rect.start_x = x_; param.dst_rect.start_y = y_; param.dst_rect.width = width_; param.dst_rect.height = height_; int size = std::min(text_.length(), sizeof(param.user_text) - 1); text_.copy(param.user_text, size); param.user_text[size] = '\0'; OVDBG_VERBOSE("%s:Exit ", __func__); } int32_t OverlayItemText::UpdateParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); int32_t ret = 0; if (param.dst_rect.width <= 0 || param.dst_rect.height <= 0) { return BAD_VALUE; } if (param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { return BAD_VALUE; } location_type_ = param.location; x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; if (width_ != param.dst_rect.width || height_ != param.dst_rect.height) { width_ = param.dst_rect.width; height_ = param.dst_rect.height; surface_.width_ = std::max(kCairoBufferMinWidth, width_); surface_.width_ = ROUND_TO(surface_.width_, 16); surface_.height_ = std::max(kCairoBufferMinHeight, height_); OVDBG_INFO("%s: New Offscreen buffer:(%dx%d)",__func__, surface_.width_, surface_.height_); DestroySurface(); ret = CreateSurface(); if (ret != 0) { OVDBG_ERROR("%s: CreateSurface failed!", __func__); return ret; } } if (text_color_ != param.color) { text_color_ = param.color; MarkDirty(true); } if (text_.compare(param.user_text)) { text_ = param.user_text; MarkDirty(true); } OVDBG_VERBOSE("%s:Exit ",__func__); return ret; } int32_t OverlayItemText::CreateSurface() { OVDBG_VERBOSE("%s: Enter", __func__); int32_t size = width_ * height_ * 4; int32_t format; IonMemInfo mem_info; memset(&mem_info, 0x0, sizeof(IonMemInfo)); auto ret = AllocateIonMemory(mem_info, size); if(0 != ret) { OVDBG_ERROR("%s:AllocateIonMemory failed",__func__); return ret; } OVDBG_INFO("%s: Ion memory allocated fd = %d", __func__, mem_info.fd); #if USE_CAIRO cr_surface_ = cairo_image_surface_create_for_data(static_cast<unsigned char *> (mem_info.vaddr), CAIRO_FORMAT_ARGB32, surface_.width_, surface_.height_, surface_.width_ * 4); assert (cr_surface_ != nullptr); cr_context_ = cairo_create (cr_surface_); assert (cr_context_ != nullptr); #elif USE_SKIA //Create Skia canvas out of ION memory.
SkImageInfo imageInfo = SkImageInfo::Make(width_, height_, kRGBA_8888_SkColorType, kPremul_SkAlphaType); #ifdef ANDROID_O_OR_ABOVE canvas_ = (SkCanvas::MakeRasterDirect(imageInfo, mem_info.vaddr, width_ * 4)).release(); #else canvas_ = SkCanvas::NewRasterDirect(imageInfo, mem_info.vaddr, width_ * 4); #endif if(!canvas_) { OVDBG_ERROR("%s: Skia Creation failed!!",__func__); goto ERROR; } #endif //Draw system time on Skia canvas. UpdateAndDraw(); #if USE_CAIRO format = C2D_COLOR_FORMAT_8888_ARGB; #elif USE_SKIA format = C2D_FORMAT_SWAP_ENDIANNESS | C2D_COLOR_FORMAT_8888_RGBA; #endif ret = MapOverlaySurface(surface_, mem_info, format); if (ret) { OVDBG_ERROR("%s: Map failed!",__func__); goto ERROR; } OVDBG_INFO("%s: Exit", __func__); return ret; ERROR: close(surface_.ion_fd_); surface_.ion_fd_ = -1; return ret; } int32_t OverlayItemPrivacyMask::Init(OverlayParam& param) { OVDBG_VERBOSE("%s: Enter", __func__); if((param.dst_rect.width <= 0) || (param.dst_rect.height <= 0)) { return BAD_VALUE; } if(param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { return BAD_VALUE; } x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; mask_color_ = param.color; config_ = param.privacy_mask; surface_.width_ = std::min(width_, kMaskBoxBufWidth); surface_.height_ = (surface_.width_ * height_) / width_; surface_.height_ = ROUND_TO(surface_.height_, 2); OVDBG_INFO("%s: Offscreen buffer:(%dx%d)",__func__, surface_.width_, surface_.height_); auto ret = CreateSurface(); if(ret != 0) { OVDBG_ERROR("%s: CreateSurface failed!", __func__); return NO_INIT; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; } int32_t OverlayItemPrivacyMask::UpdateAndDraw() { OVDBG_VERBOSE("%s: Enter ", __func__); int32_t ret = 0; if(!dirty_) { OVDBG_DEBUG("%s: Item is not dirty! 
Don't draw!", __func__); return ret; } SyncStart(surface_.ion_fd_); #if USE_CAIRO ClearSurface(); RGBAValues mask_color; ExtractColorValues(mask_color_, &mask_color); cairo_set_source_rgba(cr_context_, mask_color.red, mask_color.green, mask_color.blue, mask_color.alpha); switch (config_.type) { case OverlayPrivacyMaskType::kRectangle: { uint32_t x = (config_.rectangle.start_x * surface_.width_) / width_; uint32_t y = (config_.rectangle.start_y * surface_.width_) / width_; uint32_t w = (config_.rectangle.width * surface_.width_) / width_; uint32_t h = (config_.rectangle.height * surface_.width_) / width_; cairo_rectangle(cr_context_, x, y, w, h); cairo_fill(cr_context_); } break; case OverlayPrivacyMaskType::kInverseRectangle: { uint32_t x = (config_.rectangle.start_x * surface_.width_) / width_; uint32_t y = (config_.rectangle.start_y * surface_.width_) / width_; uint32_t w = (config_.rectangle.width * surface_.width_) / width_; uint32_t h = (config_.rectangle.height * surface_.width_) / width_; cairo_rectangle(cr_context_, 0, 0, surface_.width_, surface_.height_); cairo_rectangle(cr_context_, x, y, w, h); cairo_set_fill_rule(cr_context_, CAIRO_FILL_RULE_EVEN_ODD); cairo_fill(cr_context_); } break; case OverlayPrivacyMaskType::kCircle: { uint32_t cx = (config_.circle.center_x * surface_.width_) / width_; uint32_t cy = (config_.circle.center_y * surface_.height_) / height_; uint32_t rad = (config_.circle.radius * surface_.width_) / width_; cairo_arc(cr_context_, cx, cy, rad, 0, 2 * M_PI); cairo_fill(cr_context_); } break; case OverlayPrivacyMaskType::kInverseCircle: { uint32_t cx = (config_.circle.center_x * surface_.width_) / width_; uint32_t cy = (config_.circle.center_y * surface_.height_) / height_; uint32_t rad = (config_.circle.radius * surface_.width_) / width_; cairo_arc(cr_context_, cx, cy, rad, 0, 2 * M_PI); cairo_rectangle(cr_context_, 0, 0, surface_.width_, surface_.height_); cairo_set_fill_rule(cr_context_, CAIRO_FILL_RULE_EVEN_ODD); cairo_fill(cr_context_); } break; default: OVDBG_DEBUG("%s: Unsupported privacy mask type %d", __func__, config_.type); return -1; } assert(CAIRO_STATUS_SUCCESS == cairo_status(cr_context_)); cairo_surface_flush (cr_surface_); #elif USE_SKIA //Create Skia canvas outof ION memory. SkPaint paintBox; #ifndef DEBUG_BACKGROUND_SURFACE canvas_->clear(SK_AlphaOPAQUE); #else canvas_->clear(SK_ColorDKGRAY); #endif paintBox.setColor(mask_color_); paintBox.setStyle(SkPaint::kFill_Style); //For blurring effect #ifdef ANDROID_O_OR_ABOVE paintBox.setMaskFilter(SkBlurMaskFilter::Make(kNormal_SkBlurStyle,5.0f, 0)); #else paintBox.setMaskFilter(SkBlurMaskFilter::Create(kNormal_SkBlurStyle,5.0f, 0)); #endif OVDBG_VERBOSE("x %d y %d width %d height %d", x_, y_, width_, height_); canvas_->drawRect(SkRect::MakeXYWH(0, 0, width_, height_), paintBox); canvas_->flush(); #endif SyncEnd(surface_.ion_fd_); // Don't paint until params gets updated by app(UpdateParameters). 
MarkDirty(false); return OK; } void OverlayItemPrivacyMask::GetDrawInfo(uint32_t targetWidth, uint32_t targetHeight, std::vector& draw_infos) { OVDBG_VERBOSE("%s: Enter", __func__); DrawInfo draw_info; memset(&draw_info, 0x0, sizeof(DrawInfo)); draw_info.x = x_; draw_info.y = y_; draw_info.width = width_; draw_info.height = height_; #ifdef OVERLAY_OPEN_CL_BLIT draw_info.mask = surface_.cl_buffer_; draw_info.blit_inst = surface_.blit_inst_; #else // OVERLAY_OPEN_CL_BLIT draw_info.c2dSurfaceId = surface_.c2dsurface_id_; #endif // OVERLAY_OPEN_CL_BLIT draw_infos.push_back(draw_info); OVDBG_VERBOSE("%s: Exit", __func__); } void OverlayItemPrivacyMask::GetParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); param.type = OverlayType::kPrivacyMask; param.location = OverlayLocationType::kNone; param.dst_rect.start_x = x_; param.dst_rect.start_y = y_; param.dst_rect.width = width_; param.dst_rect.height = height_; param.color = mask_color_; OVDBG_VERBOSE("%s:Exit ",__func__); } int32_t OverlayItemPrivacyMask::UpdateParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); int32_t ret = 0; if((param.dst_rect.width <= 0) || (param.dst_rect.height <= 0)) { return BAD_VALUE; } if(param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { return BAD_VALUE; } x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; mask_color_ = param.color; config_ = param.privacy_mask; surface_.width_ = kMaskBoxBufWidth; surface_.height_ = (surface_.width_ * height_) / width_; surface_.height_ = ROUND_TO(surface_.height_, 2); OVDBG_INFO("%s: Offscreen buffer:(%dx%d)",__func__, surface_.width_, surface_.height_); // Mark dirty, updated contents would be re-painted in next paint cycle. MarkDirty(true); OVDBG_VERBOSE("%s:Exit ",__func__); return ret; } int32_t OverlayItemPrivacyMask::CreateSurface() { OVDBG_VERBOSE("%s: Enter", __func__); int32_t size = surface_.width_ * surface_.height_ * 4; int32_t format; IonMemInfo mem_info; memset(&mem_info, 0x0, sizeof(IonMemInfo)); auto ret = AllocateIonMemory(mem_info, size); if(0 != ret) { OVDBG_ERROR("%s:AllocateIonMemory failed",__func__); return ret; } OVDBG_DEBUG("%s: Ion memory allocated fd(%d)", __func__, mem_info.fd); #if USE_CAIRO cr_surface_ = cairo_image_surface_create_for_data(static_cast (mem_info.vaddr), CAIRO_FORMAT_ARGB32, surface_.width_, surface_.height_, surface_.width_ * 4); assert (cr_surface_ != nullptr); cr_context_ = cairo_create (cr_surface_); assert (cr_context_ != nullptr); #elif USE_SKIA //Create Skia canvas outof ION memory. 
SkImageInfo imageInfo = SkImageInfo::Make(surface_.width_, surface_.height_, kRGBA_8888_SkColorType, kPremul_SkAlphaType); #ifdef ANDROID_O_OR_ABOVE canvas_ = (SkCanvas::MakeRasterDirect(imageInfo, mem_info.vaddr, surface_.width_ *4)).release(); #else canvas_ = SkCanvas::NewRasterDirect(imageInfo, mem_info.vaddr, surface_.width_ *4); #endif if(!canvas_) { OVDBG_ERROR("%s: Skia Creation failed!!", __func__); goto ERROR; } #endif format = C2D_COLOR_FORMAT_8888_ARGB; ret = MapOverlaySurface(surface_, mem_info, format); if (ret) { OVDBG_ERROR("%s: Map failed!",__func__); goto ERROR; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; ERROR: close(surface_.ion_fd_); surface_.ion_fd_ = -1; return ret; } int32_t OverlayItemGraph::Init(OverlayParam& param) { OVDBG_VERBOSE("%s: Enter", __func__); if (param.dst_rect.width <= 0 || param.dst_rect.height <= 0) { OVDBG_ERROR("%s: failed: dim: %dx%d", __func__, param.dst_rect.width, param.dst_rect.height); return BAD_VALUE; } if (param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { OVDBG_ERROR("%s: failed: x/y: %dx%d", __func__, param.dst_rect.start_x, param.dst_rect.start_y); return BAD_VALUE; } if (param.graph.points_count > OVERLAY_GRAPH_NODES_MAX_COUNT) { OVDBG_ERROR("%s: failed: points_count %d", __func__, param.graph.points_count); return BAD_VALUE; } if (param.graph.chain_count > OVERLAY_GRAPH_CHAIN_MAX_COUNT) { OVDBG_ERROR("%s: failed: chain_count %d", __func__, param.graph.chain_count); return BAD_VALUE; } x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; graph_color_ = param.color; graph_ = param.graph; float scaled_width = static_cast<float>(width_) / DOWNSCALE_FACTOR; float scaled_height = static_cast<float>(height_) / DOWNSCALE_FACTOR; float aspect_ratio = scaled_width / scaled_height; OVDBG_INFO("%s: Graph(W:%dxH:%d), aspect_ratio(%f), scaled(W:%fxH:%f)", __func__, param.dst_rect.width, param.dst_rect.height, aspect_ratio, scaled_width, scaled_height); int32_t width = static_cast<int32_t>(round(scaled_width)); width = ROUND_TO(width, 16); // Round to multiple of 16. width = width > kGraphBufWidth ? width : kGraphBufWidth; int32_t height = (static_cast<int32_t>(width/aspect_ratio + 15) >> 4) << 4; height = height > kGraphBufHeight ? height : kGraphBufHeight; surface_.width_ = width; surface_.height_ = height; downscale_ratio_ = (float)width_ / (float)surface_.width_; OVDBG_INFO("%s: Offscreen buffer:(%dx%d)",__func__, surface_.width_, surface_.height_); auto ret = CreateSurface(); if (ret != 0) { OVDBG_ERROR("%s: CreateSurface failed!", __func__); return NO_INIT; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; } int32_t OverlayItemGraph::UpdateAndDraw() { OVDBG_VERBOSE("%s: Enter ", __func__); int32_t ret = 0; if(!dirty_) { OVDBG_DEBUG("%s: Item is not dirty!
Don't draw!", __func__); return ret; } SyncStart(surface_.ion_fd_); #if USE_CAIRO OVDBG_INFO("%s: Draw graph!", __func__); ClearSurface(); RGBAValues bbox_color; memset(&bbox_color, 0x0, sizeof bbox_color); ExtractColorValues(graph_color_, &bbox_color); cairo_set_source_rgba (cr_context_, bbox_color.red, bbox_color.green, bbox_color.blue, bbox_color.alpha); cairo_set_line_width (cr_context_, kLineWidth); // draw key points for (int i = 0; i < graph_.points_count; i++) { if (graph_.points[i].x >= 0 && graph_.points[i].y >= 0) { cairo_arc (cr_context_, (uint32_t)((float) graph_.points[i].x / downscale_ratio_), (uint32_t)((float) graph_.points[i].y / downscale_ratio_), kDotRadius, 0, 2 * M_PI); cairo_fill (cr_context_); } } // draw links for (int i = 0; i < graph_.chain_count; i++) { cairo_move_to (cr_context_, (uint32_t)((float) graph_.points[graph_.chain[i][0]].x / downscale_ratio_), (uint32_t)((float) graph_.points[graph_.chain[i][0]].y / downscale_ratio_)); cairo_line_to (cr_context_, (uint32_t)((float) graph_.points[graph_.chain[i][1]].x / downscale_ratio_), (uint32_t)((float) graph_.points[graph_.chain[i][1]].y / downscale_ratio_)); cairo_stroke (cr_context_); } cairo_surface_flush (cr_surface_); #endif SyncEnd(surface_.ion_fd_); MarkDirty(false); OVDBG_VERBOSE("%s: Exit", __func__); return ret; } void OverlayItemGraph::GetDrawInfo(uint32_t targetWidth, uint32_t targetHeight, std::vector& draw_infos) { OVDBG_VERBOSE("%s: Enter", __func__); DrawInfo draw_info; memset(&draw_info, 0x0, sizeof(DrawInfo)); draw_info.x = x_; draw_info.y = y_; draw_info.width = width_; draw_info.height = height_; #ifdef OVERLAY_OPEN_CL_BLIT draw_info.mask = surface_.cl_buffer_; draw_info.blit_inst = surface_.blit_inst_; #else // OVERLAY_OPEN_CL_BLIT draw_info.c2dSurfaceId = surface_.c2dsurface_id_; #endif // OVERLAY_OPEN_CL_BLIT draw_infos.push_back(draw_info); OVDBG_VERBOSE("%s: Exit", __func__); } void OverlayItemGraph::GetParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); param.type = OverlayType::kGraph; param.location = OverlayLocationType::kNone; param.color = graph_color_; param.dst_rect.start_x = x_; param.dst_rect.start_y = y_; param.dst_rect.width = width_; param.dst_rect.height = height_; OVDBG_VERBOSE("%s:Exit ",__func__); } int32_t OverlayItemGraph::UpdateParameters(OverlayParam& param) { OVDBG_VERBOSE("%s:Enter ",__func__); int32_t ret = 0; if (param.dst_rect.width <= 0 || param.dst_rect.height <= 0) { OVDBG_ERROR("%s: failed: dim: %dx%d", __func__, param.dst_rect.width, param.dst_rect.height); return BAD_VALUE; } if (param.dst_rect.start_x < 0 || param.dst_rect.start_y < 0) { OVDBG_ERROR("%s: failed: x/y: %dx%d", __func__, param.dst_rect.start_x, param.dst_rect.start_y); return BAD_VALUE; } if (param.graph.points_count > OVERLAY_GRAPH_NODES_MAX_COUNT) { OVDBG_ERROR("%s: failed: points_count %d", __func__, param.graph.points_count); return BAD_VALUE; } if (param.graph.chain_count > OVERLAY_GRAPH_CHAIN_MAX_COUNT) { OVDBG_ERROR("%s: failed: chain_count %d", __func__, param.graph.chain_count); return BAD_VALUE; } x_ = param.dst_rect.start_x; y_ = param.dst_rect.start_y; width_ = param.dst_rect.width; height_ = param.dst_rect.height; graph_color_ = param.color; graph_ = param.graph; MarkDirty(true); OVDBG_VERBOSE("%s:Exit ",__func__); return ret; } int32_t OverlayItemGraph::CreateSurface() { OVDBG_VERBOSE("%s: Enter", __func__); int32_t size = surface_.width_ * surface_.height_ * 4; int32_t format; IonMemInfo mem_info; memset(&mem_info, 0x0, sizeof(IonMemInfo)); auto ret = 
AllocateIonMemory(mem_info, size); if(0 != ret) { OVDBG_ERROR("%s:AllocateIonMemory failed",__func__); return ret; } OVDBG_DEBUG("%s: Ion memory allocated fd(%d)", __func__, mem_info.fd); #if USE_CAIRO cr_surface_ = cairo_image_surface_create_for_data(static_cast (mem_info.vaddr), CAIRO_FORMAT_ARGB32, surface_.width_, surface_.height_, surface_.width_ * 4); assert (cr_surface_ != nullptr); cr_context_ = cairo_create (cr_surface_); assert (cr_context_ != nullptr); #endif #if USE_CAIRO format = C2D_COLOR_FORMAT_8888_ARGB; #endif ret = MapOverlaySurface(surface_, mem_info, format); if (ret) { OVDBG_ERROR("%s: Map failed!",__func__); goto ERROR; } OVDBG_VERBOSE("%s: Exit", __func__); return ret; ERROR: close(surface_.ion_fd_); surface_.ion_fd_ = -1; return ret; } }; // namespace overlay }; // namespace qmmf ================================================ FILE: qti_gst_plugins/qtioverlay/qtiqmmf_overlay/qmmf_overlay_item.h ================================================ /* * Copyright (c) 2016-2020, The Linux Foundation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of The Linux Foundation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #pragma once #include #include #include #include #include #include #include #ifdef OVERLAY_OPEN_CL_BLIT #include #include #endif // OVERLAY_OPEN_CL_BLIT #include "common/utils/qmmf_condition.h" #if USE_SKIA #include #elif USE_CAIRO #include #endif namespace qmmf { namespace overlay { /** OVDBG_INFO, ERROR and WARN logs are enabled all the time by default. */ #define OVDBG_INFO(fmt, args...) ALOGD(fmt, ##args) #define OVDBG_ERROR(fmt, args...) ALOGE(fmt, ##args) #define OVDBG_WARN(fmt, args...) ALOGW(fmt, ##args) // Remove comment markers to define LOG_LEVEL_DEBUG for debugging-related logs //#define LOG_LEVEL_DEBUG // Remove comment markers to define LOG_LEVEL_VERBOSE for complete logs //#define LOG_LEVEL_VERBOSE #ifdef LOG_LEVEL_DEBUG #define OVDBG_DEBUG(fmt, args...) ALOGD(fmt, ##args) #else #define OVDBG_DEBUG(...) ((void)0) #endif #ifdef LOG_LEVEL_VERBOSE #define OVDBG_VERBOSE(fmt, args...) ALOGD(fmt, ##args) #else #define OVDBG_VERBOSE(...) 
((void)0) #endif #define OVERLAYITEM_X_MARGIN_PERCENT 0.5 #define OVERLAYITEM_Y_MARGIN_PERCENT 0.5 #define MAX_LEN 128 #define MAX_OVERLAYS 10 #define BG_TRANSPARENT_COLOR 0xFFFFFF00 #define BG_DEBUG_COLOR 0xFFE5CC80 //Light gray. #define DOWNSCALE_FACTOR 4 #define BLIT_KERNEL "/usr/lib/qmmf/overlay_blit_kernel.cl" #define BLIT_KERNEL_NAME "overlay_cl" // Remove comment marker to enable backgroud surface drawing of overlay objects. //#define DEBUG_BACKGROUND_SURFACE // Remove comment marker to measure time taken in overlay drawing. //#define DEBUG_BLIT_TIME #define PROP_DUMP_BLOB_IMAGE "persist.qmmf.overlay.dump.blob" #define PROP_BOX_STROKE_WIDTH "persist.qmmf.overlay.stroke.width" #ifdef OVERLAY_OPEN_CL_BLIT struct OpenClFrame { cl_mem cl_buffer; cl_uint plane0_offset; cl_uint plane1_offset; cl_ushort stride0; cl_ushort stride1; cl_ushort swap_uv; }; struct OpenCLArgs { uint32_t width; uint32_t height; uint32_t x; uint32_t y; cl_mem mask; }; class OpenClKernel { public: OpenClKernel(const std::string &kernel_name) : kernel_name_(kernel_name), prog_(nullptr), kernel_(nullptr), kernel_dimensions_(2), local_size_{0, 0}, global_size_{0, 0}, global_offset_{0, 0} {} OpenClKernel(const OpenClKernel &other) : kernel_name_(other.kernel_name_), prog_(other.prog_), kernel_(nullptr), kernel_dimensions_(other.kernel_dimensions_), local_size_{0, 0}, global_size_{0, 0}, global_offset_{0, 0} {} ~OpenClKernel(); static std::shared_ptr New(const std::string &path_to_src, const std::string &name); std::shared_ptr AddInstance(); int32_t BuildProgram(const std::string &path_to_src); int32_t SetKernelArgs(OpenClFrame &frame, OpenCLArgs &args); int32_t RunCLKernel(bool wait_to_finish); static int32_t MapBuffer(cl_mem &cl_buffer, void *vaddr, int32_t fd, uint32_t size); static int32_t UnMapBuffer(cl_mem &cl_buffer); static int32_t MapImage(cl_mem &cl_buffer, void *vaddr, int32_t fd, size_t width, size_t height, uint32_t stride); static int32_t unMapImage(cl_mem &cl_buffer); private: static int32_t OpenCLInit(); static int32_t OpenCLDeInit(); int32_t CreateKernelInstance(); static void ClCompleteCallback(cl_event event, cl_int event_command_exec_status, void *user_data); std::string CreateCLKernelBuildLog(); static cl_device_id device_id_; static cl_context context_; static cl_command_queue command_queue_; static std::mutex lock_; static int32_t ref_count; std::string kernel_name_; cl_program prog_; cl_kernel kernel_; cl_uint kernel_dimensions_; size_t local_size_[2]; size_t global_size_[2]; size_t global_offset_[2]; static const uint32_t kWaitProcessTimeout = 2000000000; // 2 sec. 
struct SyncObject { bool done_; QCondition signal_; std::mutex lock_; } sync_; }; #endif // OVERLAY_OPEN_CL_BLIT struct DrawInfo { uint32_t width; uint32_t height; uint32_t x; uint32_t y; #ifdef OVERLAY_OPEN_CL_BLIT cl_mem mask; std::shared_ptr blit_inst; #else uint32_t c2dSurfaceId; #endif uint32_t in_width; uint32_t in_height; uint32_t in_x; uint32_t in_y; }; struct RGBAValues { double red; double green; double blue; double alpha; }; struct C2dObjects { C2D_OBJECT objects[MAX_OVERLAYS*2]; }; class OverlaySurface { public: OverlaySurface () : width_(0), height_(0), gpu_addr_(nullptr), vaddr_(nullptr), ion_fd_(0), size_(0) { #ifdef OVERLAY_OPEN_CL_BLIT cl_buffer_ = nullptr; blit_inst_ = nullptr; #else // OVERLAY_OPEN_CL_BLIT c2dsurface_id_ = -1; #endif // OVERLAY_OPEN_CL_BLIT } uint32_t width_; uint32_t height_; void * gpu_addr_; void * vaddr_; int32_t ion_fd_; uint32_t size_; #ifdef OVERLAY_OPEN_CL_BLIT cl_mem cl_buffer_; std::shared_ptr blit_inst_; #else // OVERLAY_OPEN_CL_BLIT uint32_t c2dsurface_id_; #endif // OVERLAY_OPEN_CL_BLIT }; //Base class for all types of overlays. class OverlayItem { public: #ifdef OVERLAY_OPEN_CL_BLIT OverlayItem(int32_t ion_device, OverlayType type, std::shared_ptr &blit); #else // OVERLAY_OPEN_CL_BLIT OverlayItem(int32_t ion_device, OverlayType type); #endif // OVERLAY_OPEN_CL_BLIT virtual ~OverlayItem(); virtual int32_t Init(OverlayParam& param) = 0 ; virtual int32_t UpdateAndDraw() = 0; virtual void GetDrawInfo(uint32_t target_width, uint32_t target_height, std::vector& draw_infos) = 0 ; virtual void GetParameters(OverlayParam& param) = 0; virtual int32_t UpdateParameters(OverlayParam& param) = 0; OverlayType& GetItemType() {return type_; } void MarkDirty(bool dirty); void Activate(bool value); bool IsActive() { return is_active_; } protected: struct IonMemInfo { uint32_t size; int32_t fd; void * vaddr; }; int32_t AllocateIonMemory(IonMemInfo& mem_info, uint32_t size); void FreeIonMemory(void *&vaddr, int32_t &ion_fd, uint32_t size); int32_t MapOverlaySurface(OverlaySurface &surface, IonMemInfo &mem_info, int32_t format); void UnMapOverlaySurface(OverlaySurface &surface); void ExtractColorValues(uint32_t hex_color, RGBAValues* color); virtual int32_t CreateSurface() = 0; void ClearSurface(); virtual void DestroySurface(); int32_t x_; int32_t y_; uint32_t width_; uint32_t height_; OverlaySurface surface_; OverlayLocationType location_type_; bool dirty_; int32_t ion_device_; OverlayType type_; #if USE_CAIRO cairo_surface_t* cr_surface_; cairo_t* cr_context_; #endif private: bool is_active_; }; class OverlayItemStaticImage : public OverlayItem { public: #ifdef OVERLAY_OPEN_CL_BLIT OverlayItemStaticImage(int32_t ion_device, std::shared_ptr &blit) : OverlayItem(ion_device, OverlayType::kStaticImage, blit), image_path_() {}; #else // OVERLAY_OPEN_CL_BLIT OverlayItemStaticImage(int32_t ion_device) : OverlayItem(ion_device, OverlayType::kStaticImage), image_path_() {}; #endif // OVERLAY_OPEN_CL_BLIT virtual ~OverlayItemStaticImage(); int32_t Init(OverlayParam& param) override; int32_t UpdateAndDraw() override; void GetDrawInfo(uint32_t target_width, uint32_t target_height, std::vector& draw_infos) override; void GetParameters(OverlayParam& param) override; int32_t UpdateParameters(OverlayParam& param) override; private: int32_t CreateSurface(); void DestroySurface(); android::String8 image_path_; OverlayImageType image_type_; char * image_buffer_; uint32_t image_size_; uint32_t crop_rect_x_; uint32_t crop_rect_y_; uint32_t crop_rect_width_; uint32_t 
crop_rect_height_; bool blob_image_dump_enabled_; int32_t blob_buffer_file_fd_; bool blob_buffer_updated_; std::mutex update_param_lock_; }; class OverlayItemDateAndTime: public OverlayItem { public: #ifdef OVERLAY_OPEN_CL_BLIT OverlayItemDateAndTime(int32_t ion_device, std::shared_ptr &blit); #else // OVERLAY_OPEN_CL_BLIT OverlayItemDateAndTime(int32_t ion_device); #endif // OVERLAY_OPEN_CL_BLIT virtual ~OverlayItemDateAndTime(); int32_t Init(OverlayParam& param) override; int32_t UpdateAndDraw() override; void GetDrawInfo(uint32_t target_width, uint32_t target_height, std::vector& draw_infos) override; void GetParameters(OverlayParam& param) override; int32_t UpdateParameters(OverlayParam& param) override; private: static const int kTextSize = 20; static const int kCairoBufferMinWidth = kTextSize * 6; static const int kCairoBufferMinHeight = kTextSize * 2; int32_t CreateSurface(); OverlayDateTimeType date_time_type_; uint32_t text_color_; time_t prev_time_; #if USE_SKIA SkCanvas* canvas_; #endif }; class OverlayItemBoundingBox: public OverlayItem { public: #ifdef OVERLAY_OPEN_CL_BLIT OverlayItemBoundingBox(int32_t ion_device, std::shared_ptr &blit); #else // OVERLAY_OPEN_CL_BLIT OverlayItemBoundingBox(int32_t ion_device); #endif // OVERLAY_OPEN_CL_BLIT virtual ~OverlayItemBoundingBox(); int32_t Init(OverlayParam& param) override; int32_t UpdateAndDraw() override; void GetDrawInfo(uint32_t target_width, uint32_t target_height, std::vector& draw_infos) override; void GetParameters(OverlayParam& param) override; int32_t UpdateParameters(OverlayParam& param) override; private: static const int32_t kBoxBuffWidth = 320; static const int32_t kStrokeWidth = 4; static const int32_t kTextLimit = 20; static const int32_t kTextSize = 25; static const int32_t kTextPercent = 20; static const int32_t kTextMargin = kStrokeWidth + 4; int32_t CreateSurface(); void ClearTextSurface(); void DestroyTextSurface(); uint32_t bbox_color_; #if USE_SKIA SkCanvas* canvas_; #endif android::String8 bbox_name_; uint32_t text_height_ = 0; #if USE_CAIRO OverlaySurface text_surface_; uint32_t box_stroke_width_; cairo_surface_t* text_cr_surface_; cairo_t* text_cr_context_; #endif }; class OverlayItemText: public OverlayItem { public: #ifdef OVERLAY_OPEN_CL_BLIT OverlayItemText(int32_t ion_device, std::shared_ptr &blit) : OverlayItem(ion_device, OverlayType::kUserText, blit), text_() {}; #else // OVERLAY_OPEN_CL_BLIT OverlayItemText(int32_t ion_device) : OverlayItem(ion_device, OverlayType::kUserText), text_() {}; #endif // OVERLAY_OPEN_CL_BLIT virtual ~OverlayItemText(); int32_t Init(OverlayParam& param) override; int32_t UpdateAndDraw() override; void GetDrawInfo(uint32_t target_width, uint32_t target_height, std::vector& draw_infos) override; void GetParameters(OverlayParam& param) override; int32_t UpdateParameters(OverlayParam& param) override; private: static const uint32_t kTextSize = 40; static const uint32_t kCairoBufferMinWidth = kTextSize * 4; static const uint32_t kCairoBufferMinHeight = kTextSize; int32_t CreateSurface(); uint32_t text_color_; std::string text_; #if USE_SKIA SkCanvas* canvas_; #endif }; class OverlayItemPrivacyMask: public OverlayItem { public: #ifdef OVERLAY_OPEN_CL_BLIT OverlayItemPrivacyMask(int32_t ion_device, std::shared_ptr &blit) : OverlayItem(ion_device, OverlayType::kPrivacyMask, blit) {}; #else // OVERLAY_OPEN_CL_BLIT OverlayItemPrivacyMask(int32_t ion_device) : OverlayItem(ion_device, OverlayType::kPrivacyMask) {}; #endif // OVERLAY_OPEN_CL_BLIT virtual ~OverlayItemPrivacyMask() {}; 
int32_t Init(OverlayParam& param) override; int32_t UpdateAndDraw() override; void GetDrawInfo(uint32_t target_width, uint32_t target_height, std::vector<DrawInfo>& draw_infos) override; void GetParameters(OverlayParam& param) override; int32_t UpdateParameters(OverlayParam& param) override; private: static const uint32_t kMaskBoxBufWidth = 1920; int32_t CreateSurface(); #if USE_SKIA SkCanvas* canvas_; #endif uint32_t mask_color_; OverlayPrivacyMask config_; }; class OverlayItemGraph : public OverlayItem { public: #ifdef OVERLAY_OPEN_CL_BLIT OverlayItemGraph(int32_t ion_device, std::shared_ptr<OpenClKernel> &blit) : OverlayItem(ion_device, OverlayType::kGraph, blit) {}; #else // OVERLAY_OPEN_CL_BLIT OverlayItemGraph(int32_t ion_device) : OverlayItem(ion_device, OverlayType::kGraph) {}; #endif // OVERLAY_OPEN_CL_BLIT virtual ~OverlayItemGraph() {}; int32_t Init(OverlayParam& param) override; int32_t UpdateAndDraw() override; void GetDrawInfo(uint32_t target_width, uint32_t target_height, std::vector<DrawInfo>& draw_infos) override; void GetParameters(OverlayParam& param) override; int32_t UpdateParameters(OverlayParam& param) override; private: int32_t CreateSurface(); static const int kDotRadius = 3; static const int kLineWidth = 2; static const int kGraphBufWidth = 480; static const int kGraphBufHeight = 270; uint32_t graph_color_; float downscale_ratio_; OverlayGraph graph_; }; }; // namespace overlay }; // namespace qmmf

================================================
FILE: useful_tricks/rtspsrc_1.md
================================================

# GStreamer Source Code Analysis — rtspsrc (1)

> How to handle an RTSP URL whose password contains '@'.

**Preface:** To make the application's stream pipeline adaptable enough to handle input streams in different formats, `uridecodebin` was chosen as the source plugin when designing the pipeline. As the customer base grew, URLs of every shape started to surface, and the first problem found was that the `uri` property of `uridecodebin` cannot handle an rtsp-url whose password contains the `@` character, even though VLC can pull and play the same stream normally. Fundamentally this is a URL-parsing problem. I designed a series of rtsp-urls containing all sorts of special characters to test against, which made a general parse function very hard to implement. On reflection, though, there is no need to handle every case: with GStreamer as the base framework, RTSP streams are always handled by the `rtspsrc` plugin, so it suffices to follow `rtspsrc`'s own implementation. That is how this first article of the `rtspsrc` source-code analysis came about.

Note 1: GStreamer's source is hosted on GitLab; the GitHub repository is only a mirror whose updates and branches are incomplete, so it is recommended to read the source on GitLab (or to clone it and read locally).

Note 2: The development platform is Ubuntu 18.04, whose default GStreamer version is 1.14.5, so that version is used for the analysis below; as far as this article is concerned, the latest 1.20 source is unchanged.

### rtspsrc.c

When `uridecodebin` handles an RTSP stream, it creates an `rtspsrc` element while handling the `source-setup` signal, and the uri is passed along to `rtspsrc`. From the `gst-inspect-1.0` tool, or from the [rtspsrc properties reference](https://gstreamer.freedesktop.org/documentation/rtsp/rtspsrc.html?gi-language=c#properties), we can guess that the property to watch is `location`, since `location` is `rtspsrc`'s url property; so in `rtspsrc.c` we focus on the code paths driven by the value of `location`.

#### gst_rtspsrc_uri_set_uri

Searching `rtspsrc.c` for `PROP_LOCATION` shows that the setter of the location property calls `gst_rtspsrc_uri_set_uri`. Leaving the rtsp-sdp case aside, the core code is:

```c
  } else {
    /* try to parse */
    GST_DEBUG_OBJECT (src, "parsing URI");
    if ((res = gst_rtsp_url_parse (uri, &newurl)) < 0)
      goto parse_error;
  }

  /* if worked, free previous and store new url object along with the original
   * location.
   */
  GST_DEBUG_OBJECT (src, "configuring URI");
  g_free (src->conninfo.location);
  src->conninfo.location = g_strdup (uri);
  gst_rtsp_url_free (src->conninfo.url);
  src->conninfo.url = newurl;
  g_free (src->conninfo.url_str);
  if (newurl)
    src->conninfo.url_str = gst_rtsp_url_get_request_uri (src->conninfo.url);
  else
    src->conninfo.url_str = NULL;
```

- uri (location) is assigned directly to `src->conninfo.location`;
- `gst_rtsp_url_parse` is called to parse uri (location) into the `GstRTSPUrl` structure variable newurl;
- `gst_rtsp_url_get_request_uri` is called to produce a url with the user-id and user-pw fields stripped out, which is assigned to `src->conninfo.url_str` (the code is simple, so it is not expanded here).

Note: only IPv4 stream addresses are analyzed.

#### gst_rtsp_url_parse (const gchar * urlstr, GstRTSPUrl ** url)

First look at the two parameters of `gst_rtsp_url_parse`: one is the urlstr string, the other is a pointer to a `GstRTSPUrl` structure pointer, meaning the parse result of urlstr is stored into the passed-in url variable and returned.

Start with the declaration of the `GstRTSPUrl` structure:

```c
struct _GstRTSPUrl {
  GstRTSPLowerTrans transports;
  GstRTSPFamily family;
  gchar *user;
  gchar *passwd;
  gchar *host;
  guint16 port;
  gchar *abspath;
  gchar *query;
};
```

It holds every component an rtsp-url can have: the user name and password used for RTSP authentication, the host address, the port, the absolute path of the main/sub stream, and the query for the rtsp server.

Note: the source of `gst_rtsp_url_parse` is too long to quote; see [rtspurl.c](https://github.com/GStreamer/gst-plugins-base/blob/master/gst-libs/gst/rtsp/gstrtspurl.c). Only the parsing steps are analyzed below.

- malloc a GstRTSPUrl pointer `res`, which will be assigned to *url;
- first locate `://` to identify the rtsp protocol and the start of the stream address;
- find the first `/` or `?` after `://` and mark it as `delim`;
- find the first `:` and `@` between `://` and the `/` (or `?`), and split out `res->user` and `res->passwd` (both strings are unescaped from their ASCII percent-encoding);
- find the first `:` after the `@`, and split out `res->host` and `res->port`;
- depending on whether `delim` is `/` or `?`, split out `res->abspath` first and then `res->query`.

As can be seen, `gst_rtsp_url_parse` parses rather crudely: it first finds the first `/` after `://` in urlstr and partitions the url there, then splits the user part from the host part simply at the **first** `:` and `@`, using nothing more than `strchr` for character matching. In other words, passing the location through directly cannot handle a user password containing `@`.

This means the problem cannot be solved by simply forwarding uridecodebin's uri property to rtspsrc's location property. However, the code above shows that `src->conninfo.url_str` is a url without user and passwd, and the documentation shows that rtspsrc offers two extra properties, `user-id` and `user-pw`, for storing the RTSP credentials separately. The problem therefore reduces to splitting user and passwd out of the uri passed to uridecodebin.

#### Tests

- Using `user-id` and `user-pw`: test passed.
- Escaping the URL: following the workaround in the issue [elasticRTC - cannot play streams with @ in rtsp password](https://github.com/Kurento/bugtracker/issues/173), the user escapes all special characters in the credentials to their ASCII percent-encoding in advance, e.g. `@` becomes `%40`: test passed.

#### Constraints

Note 1: the constraints below were tested against a Hikvision camera.

Note 2: the official url format `rtsp[u]://[user:passwd@]host[:port]/abspath[?query]` is taken as the standard.

- The `passwd` field must not contain `/`: the implementation in `rtspurl.c` is not perfect either. From the source analysis above, `gst_rtsp_url_parse()` uses `/` as the split point between the IP address and the absolute stream path, so neither the IP address nor the user information before that `/` may contain a `/`. Although the camera allows special characters in passwords, once a `/` is actually put into the password, neither GStreamer's rtspsrc nor VLC can parse the URL correctly and pass the RTSP stream's user authentication.

- The `user-id` field must not contain special characters: the only special handling `gst_rtsp_url_parse()` gives `user` and `passwd` is a call to `g_uri_unescape_segment()`, which supports **user names and passwords escaped to e.g. ASCII percent-encoding**; but this requires the user to escape the `user` and `passwd` fields in advance, which is easy for developers and cumbersome for ordinary users. Moreover, when we tried to put special characters into the `user-id` field on the camera's management page, the page reported them as unsupported, so I simply adopt the same constraint.

**With the constraints above, we can take the first `/` after `://` as the baseline, search backwards from that `/` for the last `@` as the split point between passwd and host, then find the first `:` to split out user-id and user-pw, and finally hand them to rtspsrc in uridecodebin's source-setup callback.**
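To make this concrete, here is a minimal sketch of the approach under the constraints above. It is my own illustration rather than code from `rtspsrc`: the helper `split_rtsp_credentials`, the callback `on_source_setup`, and the sample camera URL are hypothetical, and error handling is omitted.

```c
#include <gst/gst.h>
#include <string.h>

/* Credentials split out of the original URL; globals for brevity. */
static gchar *g_user_id = NULL;
static gchar *g_user_pw = NULL;

/* Split "rtsp://user:pa@ss@host:554/path" into a credential-free URL plus
 * user/password: take the first '/' after "://" as the boundary (the
 * constraints above rule out '/' inside user or passwd), the last '@'
 * before it as the credentials/host split, and the first ':' inside the
 * credentials as the user/password split. */
static gchar *
split_rtsp_credentials (const gchar * url)
{
  const gchar *proto_end = strstr (url, "://");
  if (!proto_end)
    return g_strdup (url);

  const gchar *authority = proto_end + 3;
  const gchar *path = strchr (authority, '/');
  if (!path)
    path = authority + strlen (authority);

  const gchar *at = NULL;           /* last '@' before the path */
  for (const gchar * p = authority; p < path; p++)
    if (*p == '@')
      at = p;
  if (!at)
    return g_strdup (url);          /* no embedded credentials */

  const gchar *colon = memchr (authority, ':', at - authority);
  if (colon) {
    g_user_id = g_strndup (authority, colon - authority);
    g_user_pw = g_strndup (colon + 1, at - colon - 1);
  } else {
    g_user_id = g_strndup (authority, at - authority);
  }
  /* Rebuild the URL without the "user:passwd@" segment. */
  return g_strdup_printf ("%.*s%s", (int) (authority - url), url, at + 1);
}

/* uridecodebin emits "source-setup" once the source element exists; for an
 * rtsp:// uri that source is rtspsrc, which exposes user-id/user-pw. */
static void
on_source_setup (GstElement * bin, GstElement * source, gpointer user_data)
{
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (source), "user-id"))
    g_object_set (source, "user-id", g_user_id, "user-pw", g_user_pw, NULL);
}

/* Usage (inside pipeline setup; the URL is a made-up example):
 *   gchar *uri = split_rtsp_credentials ("rtsp://admin:pa@55w0rd@192.168.1.64:554/ch1");
 *   g_object_set (uridecodebin, "uri", uri, NULL);
 *   g_signal_connect (uridecodebin, "source-setup",
 *       G_CALLBACK (on_source_setup), NULL);
 */
```

Searching backwards for the last `@` is what lets the password itself contain `@`; alternatively, as verified in the tests above, the caller can percent-encode the special characters (`@` → `%40`) and pass the uri through unchanged.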
================================================
FILE: useful_tricks/uridecodebin_1.md
================================================

# GStreamer Source Code Analysis — uridecodebin (1)

> How to enable software decoders with uridecodebin.

**Preface:** The `uridecodebin` element in GStreamer is very convenient: given a uri, it automatically selects suitable demuxers and audio/video decoders for the media, hiding the differences between container formats and decoder types. The uridecodebin documentation explains that when choosing among plugins that satisfy the requirements, it picks the one with the higher rank value, and a hardware decoder's rank is usually defined as primary (256) + 1, higher than a software decoder's, so on a platform with hardware decoding `uridecodebin` prefers the hardware decoder. To squeeze more out of the hardware and run more streams, however, I wanted to put the software decoders to work as well, without maintaining a second, separate pipeline; since the rest of the application is largely the same (reusable), I started investigating how to make `uridecodebin` use software decoding.

### force-sw-decoders

According to the official documentation, [uridecodebin](https://gstreamer.freedesktop.org/documentation/playback/uridecodebin.html?gi-language=c#uridecodebin:force-sw-decoders) already implements a switch for software decoding: simply set the `force-sw-decoders` property to true. My board, however, runs Ubuntu 18.04, whose recommended GStreamer version is 1.14.5, and `gst-inspect-1.0` shows that this version's `uridecodebin` has no `force-sw-decoders` property. Comparing the 1.18 and 1.14 `uridecodebin` sources on GitLab shows the property was only introduced in 1.18, so another route is needed. The core code behind `force-sw-decoders` is the following:

```c
/* Must be called with factories lock! */
static void
gst_uri_decode_bin_update_factories_list (GstURIDecodeBin * dec)
{
  guint32 cookie;
  GList *factories, *tmp;

  cookie = gst_registry_get_feature_list_cookie (gst_registry_get ());
  if (!dec->factories || dec->factories_cookie != cookie) {
    if (dec->factories)
      gst_plugin_feature_list_free (dec->factories);
    factories =
        gst_element_factory_list_get_elements
        (GST_ELEMENT_FACTORY_TYPE_DECODABLE, GST_RANK_MARGINAL);
    if (dec->force_sw_decoders) {
      /* filter out Hardware class elements */
      dec->factories = NULL;
      for (tmp = factories; tmp; tmp = g_list_next (tmp)) {
        GstElementFactory *factory = GST_ELEMENT_FACTORY_CAST (tmp->data);
        if (!gst_element_factory_list_is_type (factory,
                GST_ELEMENT_FACTORY_TYPE_HARDWARE)) {
          dec->factories = g_list_prepend (dec->factories, factory);
        } else {
          gst_object_unref (factory);
        }
      }
      g_list_free (factories);
    } else {
      dec->factories = factories;
    }
    dec->factories = g_list_sort (dec->factories,
        gst_playback_utils_compare_factories_func);
    dec->factories_cookie = cookie;
  }
}
```

Note the `if (dec->force_sw_decoders)` branch: when we want `uridecodebin` to use software decoding, `gst_element_factory_list_is_type (factory, GST_ELEMENT_FACTORY_TYPE_HARDWARE)` is used to filter the hardware decoders out of uridecodebin's autoplug-factories.

### autoplug-sort

As shown above, `force-sw-decoders` works by filtering every hardware decoder out of the decoder candidate list; can we do the same in our own pipeline? The answer is yes.

![img](images/spaces%2F-MhpeWY7MyqU9wFIiSlD%2Fuploads%2FAy02TJr1F9hPCwrE5eLp%2Fimage.png)

As the description highlighted in the figure explains, the `autoplug-sort` signal lets the user redefine (extend or filter) the original `autoplug-factories` list and return it to `uridecodebin`, which then constructs the pipeline from the new candidate list.

### autoplug-select

![img](images/spaces%2F-MhpeWY7MyqU9wFIiSlD%2Fuploads%2Fphs3wboDq69rwFUO5b26%2Fimage.png)

When uridecodebin is about to select a plugin it emits the `autoplug-select` signal. If the handler returns `GST_AUTOPLUG_SELECT_TRY`, uridecodebin selects that plugin; if it returns `GST_AUTOPLUG_SELECT_SKIP`, uridecodebin skips it and moves on to the next plugin that satisfies the conditions.

Note 1: the return type `GstAutoplugSelectResult *` shown in the screenshot is wrong; the correct return type is `GstAutoplugSelectResult`.

Note 2: `GstAutoplugSelectResult` is defined in the `gst-plugins-base/gst/playback/gstplay-enum.h` header, but that header is not part of GStreamer's development package, so users have to define an identical enum themselves.

### Postscript

The problem of making `uridecodebin` use software decoding is solved, but when swapping plugins you cannot reason from a single element alone: you must be clear about whether replacing hardware decoding with software decoding affects the whole pipeline, or even the whole application. On the surface only one decoder element is replaced, but plugins differ in their internal implementations and influence the entire pipeline. In particular, during application development, hardware-bound plugins such as hardware decoders often use memory that is not ordinary system memory, so the application may have been adapted to hardware decoding from the very start; even if the decoder itself can be swapped, other parts of the application may stop working.
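To close with something concrete, below is a minimal sketch of the `autoplug-select` route on GStreamer 1.14. It is my own illustration: the callback name `on_autoplug_select` is hypothetical, and treating `qtivdec` as the hardware decoder to skip is an assumption matching my Qualcomm platform (substitute the decoder names relevant to yours); the klass check only helps for factories that actually advertise a `Hardware` class.

```c
#include <gst/gst.h>
#include <string.h>

/* gstplay-enum.h is not shipped in the development package, so re-declare
 * an identical enum locally (see Note 2 above). */
typedef enum
{
  GST_AUTOPLUG_SELECT_TRY,
  GST_AUTOPLUG_SELECT_EXPOSE,
  GST_AUTOPLUG_SELECT_SKIP
} GstAutoplugSelectResult;

/* Return SKIP for any factory we consider a hardware decoder, so that
 * uridecodebin falls through to the next (software) candidate. */
static GstAutoplugSelectResult
on_autoplug_select (GstElement * bin, GstPad * pad, GstCaps * caps,
    GstElementFactory * factory, gpointer user_data)
{
  const gchar *name =
      gst_plugin_feature_get_name (GST_PLUGIN_FEATURE (factory));
  const gchar *klass =
      gst_element_factory_get_metadata (factory, GST_ELEMENT_METADATA_KLASS);

  /* "qtivdec" is the hardware decoder on my platform (an assumption here). */
  if (g_str_equal (name, "qtivdec") || (klass && strstr (klass, "Hardware"))) {
    GST_INFO ("skipping hardware decoder %s", name);
    return GST_AUTOPLUG_SELECT_SKIP;
  }
  return GST_AUTOPLUG_SELECT_TRY;
}

/* Usage:
 *   g_signal_connect (uridecodebin, "autoplug-select",
 *       G_CALLBACK (on_autoplug_select), NULL);
 */
```

As the postscript warns, after forcing a software decoder this way, verify that downstream elements still work with the ordinary system-memory buffers it produces.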